00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 627 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3287 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.080 The recommended git tool is: git 00:00:00.080 using credential 00000000-0000-0000-0000-000000000002 00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.113 Fetching changes from the remote Git repository 00:00:00.116 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.148 Using shallow fetch with depth 1 00:00:00.148 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.148 > git --version # timeout=10 00:00:00.170 > git --version # 'git version 2.39.2' 00:00:00.170 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.204 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.204 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.476 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.490 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.504 Checking out Revision 16485855f227725e8e9566ee24d00b82aaeff0db (FETCH_HEAD) 00:00:05.504 > git config core.sparsecheckout # timeout=10 00:00:05.516 > git read-tree -mu HEAD # timeout=10 00:00:05.533 > git checkout -f 16485855f227725e8e9566ee24d00b82aaeff0db # timeout=5 00:00:05.554 Commit message: "ansible/inventory: fix WFP37 mac address" 00:00:05.554 > git rev-list --no-walk 16485855f227725e8e9566ee24d00b82aaeff0db # timeout=10 00:00:05.643 [Pipeline] Start of Pipeline 00:00:05.660 [Pipeline] library 00:00:05.663 Loading library shm_lib@master 00:00:05.663 Library shm_lib@master is cached. Copying from home. 00:00:05.679 [Pipeline] node 00:29:39.949 Still waiting to schedule task 00:29:39.963 Waiting for next available executor on ‘vagrant-vm-host’ 00:55:31.608 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:55:31.610 [Pipeline] { 00:55:31.624 [Pipeline] catchError 00:55:31.626 [Pipeline] { 00:55:31.638 [Pipeline] wrap 00:55:31.649 [Pipeline] { 00:55:31.659 [Pipeline] stage 00:55:31.662 [Pipeline] { (Prologue) 00:55:31.693 [Pipeline] echo 00:55:31.695 Node: VM-host-SM16 00:55:31.705 [Pipeline] cleanWs 00:55:31.717 [WS-CLEANUP] Deleting project workspace... 00:55:31.717 [WS-CLEANUP] Deferred wipeout is used... 
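A minimal local reproduction of the pinned jbp checkout performed above — a sketch only, assuming access to the same Gerrit mirror; the commands and revision are copied from this log, and the target directory name is illustrative:

  # Shallow-fetch the build-pool repo and pin it to the revision checked out above.
  git init jbp && cd jbp
  git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git fetch --tags --force --progress --depth=1 origin refs/heads/master
  # Detached checkout of the commit "ansible/inventory: fix WFP37 mac address".
  git checkout -f 16485855f227725e8e9566ee24d00b82aaeff0db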
00:55:31.723 [WS-CLEANUP] done 00:55:31.908 [Pipeline] setCustomBuildProperty 00:55:32.001 [Pipeline] httpRequest 00:55:32.020 [Pipeline] echo 00:55:32.021 Sorcerer 10.211.164.101 is alive 00:55:32.028 [Pipeline] httpRequest 00:55:32.032 HttpMethod: GET 00:55:32.032 URL: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:55:32.032 Sending request to url: http://10.211.164.101/packages/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:55:32.033 Response Code: HTTP/1.1 200 OK 00:55:32.034 Success: Status code 200 is in the accepted range: 200,404 00:55:32.034 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:55:32.319 [Pipeline] sh 00:55:32.595 + tar --no-same-owner -xf jbp_16485855f227725e8e9566ee24d00b82aaeff0db.tar.gz 00:55:32.612 [Pipeline] httpRequest 00:55:32.629 [Pipeline] echo 00:55:32.631 Sorcerer 10.211.164.101 is alive 00:55:32.640 [Pipeline] httpRequest 00:55:32.644 HttpMethod: GET 00:55:32.645 URL: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:55:32.645 Sending request to url: http://10.211.164.101/packages/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:55:32.646 Response Code: HTTP/1.1 200 OK 00:55:32.646 Success: Status code 200 is in the accepted range: 200,404 00:55:32.646 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:55:35.963 [Pipeline] sh 00:55:36.242 + tar --no-same-owner -xf spdk_8fb860b73ad34fc27668faa56efd7776760ea187.tar.gz 00:55:39.535 [Pipeline] sh 00:55:39.809 + git -C spdk log --oneline -n5 00:55:39.809 8fb860b73 test/dd: check spdk_dd direct link to liburing 00:55:39.809 89648519b bdev/compress: Output the pm_path entry for bdev_get_bdevs() 00:55:39.809 a1a2e2b48 nvme/pcie: add debug print for number of SGL/PRP entries 00:55:39.809 8b5c4be8b nvme/fio_plugin: add support for the disable_pcie_sgl_merge option 00:55:39.809 e431ba2e4 nvme/pcie: add disable_pcie_sgl_merge option 00:55:39.826 [Pipeline] withCredentials 00:55:39.836 > git --version # timeout=10 00:55:39.846 > git --version # 'git version 2.39.2' 00:55:39.860 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:55:39.862 [Pipeline] { 00:55:39.871 [Pipeline] retry 00:55:39.873 [Pipeline] { 00:55:39.889 [Pipeline] sh 00:55:40.164 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:55:40.740 [Pipeline] } 00:55:40.762 [Pipeline] // retry 00:55:40.767 [Pipeline] } 00:55:40.786 [Pipeline] // withCredentials 00:55:40.795 [Pipeline] httpRequest 00:55:40.809 [Pipeline] echo 00:55:40.810 Sorcerer 10.211.164.101 is alive 00:55:40.817 [Pipeline] httpRequest 00:55:40.821 HttpMethod: GET 00:55:40.821 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:55:40.822 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:55:40.822 Response Code: HTTP/1.1 200 OK 00:55:40.823 Success: Status code 200 is in the accepted range: 200,404 00:55:40.823 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:55:42.053 [Pipeline] sh 00:55:42.325 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:55:44.231 [Pipeline] sh 00:55:44.582 + git -C dpdk log --oneline -n5 00:55:44.582 eeb0605f11 version: 23.11.0 00:55:44.582 238778122a doc: update release notes for 23.11 00:55:44.582 
46aa6b3cfc doc: fix description of RSS features 00:55:44.582 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:55:44.582 7e421ae345 devtools: support skipping forbid rule check 00:55:44.603 [Pipeline] writeFile 00:55:44.621 [Pipeline] sh 00:55:44.900 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:55:44.912 [Pipeline] sh 00:55:45.191 + cat autorun-spdk.conf 00:55:45.191 SPDK_RUN_FUNCTIONAL_TEST=1 00:55:45.191 SPDK_TEST_NVMF=1 00:55:45.191 SPDK_TEST_NVMF_TRANSPORT=tcp 00:55:45.191 SPDK_TEST_USDT=1 00:55:45.191 SPDK_RUN_UBSAN=1 00:55:45.191 SPDK_TEST_NVMF_MDNS=1 00:55:45.191 NET_TYPE=virt 00:55:45.191 SPDK_JSONRPC_GO_CLIENT=1 00:55:45.191 SPDK_TEST_NATIVE_DPDK=v23.11 00:55:45.191 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:55:45.191 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:55:45.197 RUN_NIGHTLY=1 00:55:45.200 [Pipeline] } 00:55:45.217 [Pipeline] // stage 00:55:45.234 [Pipeline] stage 00:55:45.236 [Pipeline] { (Run VM) 00:55:45.251 [Pipeline] sh 00:55:45.528 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:55:45.529 + echo 'Start stage prepare_nvme.sh' 00:55:45.529 Start stage prepare_nvme.sh 00:55:45.529 + [[ -n 0 ]] 00:55:45.529 + disk_prefix=ex0 00:55:45.529 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:55:45.529 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:55:45.529 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:55:45.529 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:55:45.529 ++ SPDK_TEST_NVMF=1 00:55:45.529 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:55:45.529 ++ SPDK_TEST_USDT=1 00:55:45.529 ++ SPDK_RUN_UBSAN=1 00:55:45.529 ++ SPDK_TEST_NVMF_MDNS=1 00:55:45.529 ++ NET_TYPE=virt 00:55:45.529 ++ SPDK_JSONRPC_GO_CLIENT=1 00:55:45.529 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:55:45.529 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:55:45.529 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:55:45.529 ++ RUN_NIGHTLY=1 00:55:45.529 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:55:45.529 + nvme_files=() 00:55:45.529 + declare -A nvme_files 00:55:45.529 + backend_dir=/var/lib/libvirt/images/backends 00:55:45.529 + nvme_files['nvme.img']=5G 00:55:45.529 + nvme_files['nvme-cmb.img']=5G 00:55:45.529 + nvme_files['nvme-multi0.img']=4G 00:55:45.529 + nvme_files['nvme-multi1.img']=4G 00:55:45.529 + nvme_files['nvme-multi2.img']=4G 00:55:45.529 + nvme_files['nvme-openstack.img']=8G 00:55:45.529 + nvme_files['nvme-zns.img']=5G 00:55:45.529 + (( SPDK_TEST_NVME_PMR == 1 )) 00:55:45.529 + (( SPDK_TEST_FTL == 1 )) 00:55:45.529 + (( SPDK_TEST_NVME_FDP == 1 )) 00:55:45.529 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:55:45.529 + for nvme in "${!nvme_files[@]}" 00:55:45.529 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:55:45.529 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:55:45.529 + for nvme in "${!nvme_files[@]}" 00:55:45.529 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:55:45.529 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:55:45.529 + for nvme in "${!nvme_files[@]}" 00:55:45.529 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:55:45.529 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:55:45.529 + for nvme in "${!nvme_files[@]}" 00:55:45.529 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:55:45.529 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:55:45.529 + for nvme in "${!nvme_files[@]}" 00:55:45.529 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:55:45.529 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:55:45.529 + for nvme in "${!nvme_files[@]}" 00:55:45.529 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:55:45.529 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:55:45.529 + for nvme in "${!nvme_files[@]}" 00:55:45.529 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:55:45.786 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:55:45.786 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:55:46.043 + echo 'End stage prepare_nvme.sh' 00:55:46.043 End stage prepare_nvme.sh 00:55:46.054 [Pipeline] sh 00:55:46.330 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:55:46.330 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:55:46.330 00:55:46.330 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:55:46.330 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:55:46.330 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:55:46.330 HELP=0 00:55:46.330 DRY_RUN=0 00:55:46.330 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:55:46.330 NVME_DISKS_TYPE=nvme,nvme, 00:55:46.330 NVME_AUTO_CREATE=0 00:55:46.330 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:55:46.330 NVME_CMB=,, 00:55:46.330 NVME_PMR=,, 00:55:46.330 NVME_ZNS=,, 00:55:46.330 NVME_MS=,, 00:55:46.330 NVME_FDP=,, 00:55:46.330 
SPDK_VAGRANT_DISTRO=fedora38 00:55:46.330 SPDK_VAGRANT_VMCPU=10 00:55:46.330 SPDK_VAGRANT_VMRAM=12288 00:55:46.330 SPDK_VAGRANT_PROVIDER=libvirt 00:55:46.330 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:55:46.330 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:55:46.330 SPDK_OPENSTACK_NETWORK=0 00:55:46.330 VAGRANT_PACKAGE_BOX=0 00:55:46.330 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:55:46.330 FORCE_DISTRO=true 00:55:46.330 VAGRANT_BOX_VERSION= 00:55:46.330 EXTRA_VAGRANTFILES= 00:55:46.330 NIC_MODEL=e1000 00:55:46.330 00:55:46.330 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:55:46.330 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:55:49.605 Bringing machine 'default' up with 'libvirt' provider... 00:55:50.170 ==> default: Creating image (snapshot of base box volume). 00:55:50.429 ==> default: Creating domain with the following settings... 00:55:50.429 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721645575_c5e1eb799cbfa2505a9b 00:55:50.429 ==> default: -- Domain type: kvm 00:55:50.429 ==> default: -- Cpus: 10 00:55:50.429 ==> default: -- Feature: acpi 00:55:50.429 ==> default: -- Feature: apic 00:55:50.429 ==> default: -- Feature: pae 00:55:50.429 ==> default: -- Memory: 12288M 00:55:50.429 ==> default: -- Memory Backing: hugepages: 00:55:50.429 ==> default: -- Management MAC: 00:55:50.429 ==> default: -- Loader: 00:55:50.429 ==> default: -- Nvram: 00:55:50.429 ==> default: -- Base box: spdk/fedora38 00:55:50.429 ==> default: -- Storage pool: default 00:55:50.429 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721645575_c5e1eb799cbfa2505a9b.img (20G) 00:55:50.429 ==> default: -- Volume Cache: default 00:55:50.429 ==> default: -- Kernel: 00:55:50.429 ==> default: -- Initrd: 00:55:50.429 ==> default: -- Graphics Type: vnc 00:55:50.429 ==> default: -- Graphics Port: -1 00:55:50.429 ==> default: -- Graphics IP: 127.0.0.1 00:55:50.429 ==> default: -- Graphics Password: Not defined 00:55:50.429 ==> default: -- Video Type: cirrus 00:55:50.429 ==> default: -- Video VRAM: 9216 00:55:50.429 ==> default: -- Sound Type: 00:55:50.429 ==> default: -- Keymap: en-us 00:55:50.429 ==> default: -- TPM Path: 00:55:50.429 ==> default: -- INPUT: type=mouse, bus=ps2 00:55:50.429 ==> default: -- Command line args: 00:55:50.429 ==> default: -> value=-device, 00:55:50.429 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:55:50.429 ==> default: -> value=-drive, 00:55:50.429 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:55:50.429 ==> default: -> value=-device, 00:55:50.429 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:55:50.429 ==> default: -> value=-device, 00:55:50.429 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:55:50.429 ==> default: -> value=-drive, 00:55:50.429 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:55:50.429 ==> default: -> value=-device, 00:55:50.429 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:55:50.429 ==> default: -> value=-drive, 00:55:50.429 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:55:50.429 ==> default: -> value=-device, 00:55:50.429 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:55:50.429 ==> default: -> value=-drive, 00:55:50.429 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:55:50.429 ==> default: -> value=-device, 00:55:50.429 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:55:50.429 ==> default: Creating shared folders metadata... 00:55:50.429 ==> default: Starting domain. 00:55:52.332 ==> default: Waiting for domain to get an IP address... 00:56:10.448 ==> default: Waiting for SSH to become available... 00:56:10.448 ==> default: Configuring and enabling network interfaces... 00:56:13.729 default: SSH address: 192.168.121.80:22 00:56:13.729 default: SSH username: vagrant 00:56:13.729 default: SSH auth method: private key 00:56:15.626 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:56:22.179 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:56:28.758 ==> default: Mounting SSHFS shared folder... 00:56:30.131 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:56:30.131 ==> default: Checking Mount.. 00:56:31.065 ==> default: Folder Successfully Mounted! 00:56:31.065 ==> default: Running provisioner: file... 00:56:32.002 default: ~/.gitconfig => .gitconfig 00:56:32.260 00:56:32.260 SUCCESS! 00:56:32.260 00:56:32.260 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:56:32.260 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:56:32.260 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:56:32.260 00:56:32.268 [Pipeline] } 00:56:32.285 [Pipeline] // stage 00:56:32.293 [Pipeline] dir 00:56:32.293 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:56:32.295 [Pipeline] { 00:56:32.304 [Pipeline] catchError 00:56:32.305 [Pipeline] { 00:56:32.317 [Pipeline] sh 00:56:32.592 + vagrant ssh-config --host vagrant 00:56:32.592 + sed -ne /^Host/,$p 00:56:32.592 + tee ssh_conf 00:56:35.867 Host vagrant 00:56:35.867 HostName 192.168.121.80 00:56:35.867 User vagrant 00:56:35.867 Port 22 00:56:35.867 UserKnownHostsFile /dev/null 00:56:35.867 StrictHostKeyChecking no 00:56:35.867 PasswordAuthentication no 00:56:35.867 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:56:35.867 IdentitiesOnly yes 00:56:35.867 LogLevel FATAL 00:56:35.867 ForwardAgent yes 00:56:35.867 ForwardX11 yes 00:56:35.867 00:56:35.881 [Pipeline] withEnv 00:56:35.884 [Pipeline] { 00:56:35.897 [Pipeline] sh 00:56:36.170 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:56:36.170 source /etc/os-release 00:56:36.170 [[ -e /image.version ]] && img=$(< /image.version) 00:56:36.170 # Minimal, systemd-like check. 
00:56:36.170 if [[ -e /.dockerenv ]]; then 00:56:36.170 # Clear garbage from the node's name: 00:56:36.170 # agt-er_autotest_547-896 -> autotest_547-896 00:56:36.170 # $HOSTNAME is the actual container id 00:56:36.170 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:56:36.170 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:56:36.170 # We can assume this is a mount from a host where container is running, 00:56:36.170 # so fetch its hostname to easily identify the target swarm worker. 00:56:36.170 container="$(< /etc/hostname) ($agent)" 00:56:36.170 else 00:56:36.170 # Fallback 00:56:36.170 container=$agent 00:56:36.170 fi 00:56:36.170 fi 00:56:36.170 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:56:36.170 00:56:36.438 [Pipeline] } 00:56:36.460 [Pipeline] // withEnv 00:56:36.468 [Pipeline] setCustomBuildProperty 00:56:36.487 [Pipeline] stage 00:56:36.489 [Pipeline] { (Tests) 00:56:36.509 [Pipeline] sh 00:56:36.785 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:56:36.795 [Pipeline] sh 00:56:37.074 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:56:37.088 [Pipeline] timeout 00:56:37.088 Timeout set to expire in 40 min 00:56:37.090 [Pipeline] { 00:56:37.104 [Pipeline] sh 00:56:37.378 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:56:37.941 HEAD is now at 8fb860b73 test/dd: check spdk_dd direct link to liburing 00:56:37.950 [Pipeline] sh 00:56:38.264 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:56:38.282 [Pipeline] sh 00:56:38.558 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:56:38.573 [Pipeline] sh 00:56:38.844 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:56:39.102 ++ readlink -f spdk_repo 00:56:39.102 + DIR_ROOT=/home/vagrant/spdk_repo 00:56:39.102 + [[ -n /home/vagrant/spdk_repo ]] 00:56:39.102 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:56:39.102 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:56:39.102 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:56:39.102 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:56:39.102 + [[ -d /home/vagrant/spdk_repo/output ]] 00:56:39.102 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:56:39.102 + cd /home/vagrant/spdk_repo 00:56:39.102 + source /etc/os-release 00:56:39.102 ++ NAME='Fedora Linux' 00:56:39.102 ++ VERSION='38 (Cloud Edition)' 00:56:39.102 ++ ID=fedora 00:56:39.102 ++ VERSION_ID=38 00:56:39.102 ++ VERSION_CODENAME= 00:56:39.102 ++ PLATFORM_ID=platform:f38 00:56:39.102 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:56:39.102 ++ ANSI_COLOR='0;38;2;60;110;180' 00:56:39.102 ++ LOGO=fedora-logo-icon 00:56:39.102 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:56:39.102 ++ HOME_URL=https://fedoraproject.org/ 00:56:39.102 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:56:39.102 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:56:39.102 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:56:39.102 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:56:39.102 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:56:39.102 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:56:39.102 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:56:39.102 ++ SUPPORT_END=2024-05-14 00:56:39.102 ++ VARIANT='Cloud Edition' 00:56:39.102 ++ VARIANT_ID=cloud 00:56:39.102 + uname -a 00:56:39.102 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:56:39.102 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:56:39.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:56:39.360 Hugepages 00:56:39.360 node hugesize free / total 00:56:39.360 node0 1048576kB 0 / 0 00:56:39.360 node0 2048kB 0 / 0 00:56:39.360 00:56:39.360 Type BDF Vendor Device NUMA Driver Device Block devices 00:56:39.618 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:56:39.618 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:56:39.618 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:56:39.618 + rm -f /tmp/spdk-ld-path 00:56:39.618 + source autorun-spdk.conf 00:56:39.618 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:56:39.618 ++ SPDK_TEST_NVMF=1 00:56:39.618 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:56:39.618 ++ SPDK_TEST_USDT=1 00:56:39.618 ++ SPDK_RUN_UBSAN=1 00:56:39.618 ++ SPDK_TEST_NVMF_MDNS=1 00:56:39.618 ++ NET_TYPE=virt 00:56:39.618 ++ SPDK_JSONRPC_GO_CLIENT=1 00:56:39.618 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:56:39.618 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:56:39.618 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:56:39.618 ++ RUN_NIGHTLY=1 00:56:39.618 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:56:39.618 + [[ -n '' ]] 00:56:39.618 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:56:39.618 + for M in /var/spdk/build-*-manifest.txt 00:56:39.618 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:56:39.618 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:56:39.618 + for M in /var/spdk/build-*-manifest.txt 00:56:39.618 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:56:39.618 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:56:39.618 ++ uname 00:56:39.618 + [[ Linux == \L\i\n\u\x ]] 00:56:39.618 + sudo dmesg -T 00:56:39.618 + sudo dmesg --clear 00:56:39.618 + dmesg_pid=6002 00:56:39.618 + sudo dmesg -Tw 00:56:39.618 + [[ Fedora Linux == FreeBSD ]] 00:56:39.618 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:56:39.618 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:56:39.618 + [[ 
-e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:56:39.618 + [[ -x /usr/src/fio-static/fio ]] 00:56:39.618 + export FIO_BIN=/usr/src/fio-static/fio 00:56:39.618 + FIO_BIN=/usr/src/fio-static/fio 00:56:39.618 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:56:39.618 + [[ ! -v VFIO_QEMU_BIN ]] 00:56:39.618 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:56:39.618 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:56:39.618 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:56:39.618 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:56:39.618 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:56:39.618 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:56:39.618 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:56:39.618 Test configuration: 00:56:39.618 SPDK_RUN_FUNCTIONAL_TEST=1 00:56:39.618 SPDK_TEST_NVMF=1 00:56:39.618 SPDK_TEST_NVMF_TRANSPORT=tcp 00:56:39.618 SPDK_TEST_USDT=1 00:56:39.618 SPDK_RUN_UBSAN=1 00:56:39.618 SPDK_TEST_NVMF_MDNS=1 00:56:39.618 NET_TYPE=virt 00:56:39.618 SPDK_JSONRPC_GO_CLIENT=1 00:56:39.618 SPDK_TEST_NATIVE_DPDK=v23.11 00:56:39.618 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:56:39.618 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:56:39.876 RUN_NIGHTLY=1 10:53:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:39.877 10:53:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:56:39.877 10:53:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:39.877 10:53:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:39.877 10:53:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:39.877 10:53:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:39.877 10:53:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:39.877 10:53:44 -- paths/export.sh@5 -- $ export PATH 00:56:39.877 10:53:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:39.877 10:53:44 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 
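A rough sketch of replaying this stage by hand inside the guest, using only the paths and the invocation shown in this log; whether autorun.sh itself consumes JOB_BASE_NAME (it is set by the outer ssh call above) is an assumption:

  # Re-run the autotest entry point against the configuration file dumped above.
  cd /home/vagrant/spdk_repo
  JOB_BASE_NAME=nvmf-tcp-vg-autotest spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf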
00:56:39.877 10:53:44 -- common/autobuild_common.sh@447 -- $ date +%s 00:56:39.877 10:53:44 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721645624.XXXXXX 00:56:39.877 10:53:44 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721645624.jHwgnw 00:56:39.877 10:53:44 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:56:39.877 10:53:44 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 00:56:39.877 10:53:44 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:56:39.877 10:53:44 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:56:39.877 10:53:44 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:56:39.877 10:53:44 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:56:39.877 10:53:44 -- common/autobuild_common.sh@463 -- $ get_config_params 00:56:39.877 10:53:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:56:39.877 10:53:44 -- common/autotest_common.sh@10 -- $ set +x 00:56:39.877 10:53:44 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:56:39.877 10:53:44 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:56:39.877 10:53:44 -- pm/common@17 -- $ local monitor 00:56:39.877 10:53:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:56:39.877 10:53:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:56:39.877 10:53:44 -- pm/common@21 -- $ date +%s 00:56:39.877 10:53:44 -- pm/common@25 -- $ sleep 1 00:56:39.877 10:53:44 -- pm/common@21 -- $ date +%s 00:56:39.877 10:53:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721645624 00:56:39.877 10:53:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721645624 00:56:39.877 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721645624_collect-vmstat.pm.log 00:56:39.877 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721645624_collect-cpu-load.pm.log 00:56:40.810 10:53:45 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:56:40.810 10:53:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:56:40.810 10:53:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:56:40.810 10:53:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:56:40.810 10:53:45 -- spdk/autobuild.sh@16 -- $ date -u 00:56:40.810 Mon Jul 22 10:53:45 AM UTC 2024 00:56:40.810 10:53:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:56:40.810 v24.09-pre-259-g8fb860b73 00:56:40.810 10:53:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:56:40.810 10:53:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:56:40.810 10:53:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:56:40.810 10:53:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:56:40.810 10:53:45 -- 
common/autotest_common.sh@1105 -- $ xtrace_disable 00:56:40.810 10:53:45 -- common/autotest_common.sh@10 -- $ set +x 00:56:40.810 ************************************ 00:56:40.810 START TEST ubsan 00:56:40.810 ************************************ 00:56:40.810 using ubsan 00:56:40.810 10:53:45 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:56:40.810 00:56:40.810 real 0m0.000s 00:56:40.810 user 0m0.000s 00:56:40.810 sys 0m0.000s 00:56:40.810 10:53:45 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:56:40.810 ************************************ 00:56:40.810 END TEST ubsan 00:56:40.810 ************************************ 00:56:40.810 10:53:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:56:40.810 10:53:45 -- common/autotest_common.sh@1142 -- $ return 0 00:56:40.810 10:53:45 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:56:40.810 10:53:45 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:56:40.810 10:53:45 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:56:40.810 10:53:45 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:56:40.810 10:53:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:56:40.810 10:53:45 -- common/autotest_common.sh@10 -- $ set +x 00:56:40.810 ************************************ 00:56:40.810 START TEST build_native_dpdk 00:56:40.810 ************************************ 00:56:40.810 10:53:45 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:56:40.810 eeb0605f11 version: 23.11.0 00:56:40.810 238778122a doc: update release notes for 23.11 00:56:40.810 46aa6b3cfc doc: fix description of RSS features 00:56:40.810 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:56:40.810 7e421ae345 devtools: support skipping forbid rule check 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:56:40.810 10:53:45 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:56:40.810 10:53:45 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:56:40.810 10:53:45 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:56:40.810 10:53:46 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:56:40.810 patching file config/rte_config.h 00:56:40.810 Hunk #1 succeeded at 60 (offset 1 line). 00:56:40.810 10:53:46 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:56:40.810 10:53:46 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:56:41.068 10:53:46 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:56:41.068 10:53:46 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:56:41.068 patching file lib/pcapng/rte_pcapng.c 00:56:41.068 10:53:46 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:56:41.068 10:53:46 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:56:41.068 10:53:46 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:56:41.068 10:53:46 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:56:41.068 10:53:46 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:56:46.332 The Meson build system 00:56:46.332 Version: 1.3.1 00:56:46.332 Source dir: /home/vagrant/spdk_repo/dpdk 00:56:46.332 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:56:46.332 Build type: native build 00:56:46.332 Program cat found: YES (/usr/bin/cat) 00:56:46.332 Project name: DPDK 00:56:46.332 Project version: 23.11.0 00:56:46.332 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:56:46.332 C linker for the host machine: gcc ld.bfd 2.39-16 00:56:46.332 Host machine cpu family: x86_64 00:56:46.332 Host machine cpu: x86_64 00:56:46.332 Message: ## Building in Developer Mode ## 00:56:46.332 Program pkg-config found: YES (/usr/bin/pkg-config) 00:56:46.332 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:56:46.332 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:56:46.332 Program python3 found: YES (/usr/bin/python3) 00:56:46.332 Program cat found: YES (/usr/bin/cat) 00:56:46.332 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:56:46.332 Compiler for C supports arguments -march=native: YES 00:56:46.332 Checking for size of "void *" : 8 00:56:46.332 Checking for size of "void *" : 8 (cached) 00:56:46.332 Library m found: YES 00:56:46.332 Library numa found: YES 00:56:46.332 Has header "numaif.h" : YES 00:56:46.332 Library fdt found: NO 00:56:46.332 Library execinfo found: NO 00:56:46.333 Has header "execinfo.h" : YES 00:56:46.333 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:56:46.333 Run-time dependency libarchive found: NO (tried pkgconfig) 00:56:46.333 Run-time dependency libbsd found: NO (tried pkgconfig) 00:56:46.333 Run-time dependency jansson found: NO (tried pkgconfig) 00:56:46.333 Run-time dependency openssl found: YES 3.0.9 00:56:46.333 Run-time dependency libpcap found: YES 1.10.4 00:56:46.333 Has header "pcap.h" with dependency libpcap: YES 00:56:46.333 Compiler for C supports arguments -Wcast-qual: YES 00:56:46.333 Compiler for C supports arguments -Wdeprecated: YES 00:56:46.333 Compiler for C supports arguments -Wformat: YES 00:56:46.333 Compiler for C supports arguments -Wformat-nonliteral: NO 00:56:46.333 Compiler for C supports arguments -Wformat-security: NO 00:56:46.333 Compiler for C supports arguments -Wmissing-declarations: YES 00:56:46.333 Compiler for C supports arguments -Wmissing-prototypes: YES 00:56:46.333 Compiler for C supports arguments -Wnested-externs: YES 00:56:46.333 Compiler for C supports arguments -Wold-style-definition: YES 00:56:46.333 Compiler for C supports arguments -Wpointer-arith: YES 00:56:46.333 Compiler for C supports arguments -Wsign-compare: YES 00:56:46.333 Compiler for C supports arguments -Wstrict-prototypes: YES 00:56:46.333 Compiler for C supports arguments -Wundef: YES 00:56:46.333 Compiler for C supports arguments -Wwrite-strings: YES 00:56:46.333 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:56:46.333 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:56:46.333 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:56:46.333 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:56:46.333 Program objdump found: YES (/usr/bin/objdump) 00:56:46.333 Compiler for C supports arguments -mavx512f: YES 00:56:46.333 Checking if "AVX512 checking" compiles: YES 00:56:46.333 Fetching value of define "__SSE4_2__" : 1 00:56:46.333 Fetching value of define "__AES__" : 1 00:56:46.333 Fetching value of define "__AVX__" : 1 00:56:46.333 Fetching value of define "__AVX2__" : 1 00:56:46.333 Fetching value of define "__AVX512BW__" : (undefined) 00:56:46.333 Fetching value of define "__AVX512CD__" : (undefined) 00:56:46.333 Fetching value of define "__AVX512DQ__" : (undefined) 00:56:46.333 Fetching value of define "__AVX512F__" : (undefined) 00:56:46.333 Fetching value of define "__AVX512VL__" : (undefined) 00:56:46.333 Fetching value of define "__PCLMUL__" : 1 00:56:46.333 Fetching value of define "__RDRND__" : 1 00:56:46.333 Fetching value of define "__RDSEED__" : 1 00:56:46.333 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:56:46.333 Fetching value of define "__znver1__" : (undefined) 00:56:46.333 Fetching value of define "__znver2__" : (undefined) 00:56:46.333 Fetching value of define "__znver3__" : (undefined) 00:56:46.333 Fetching value of define "__znver4__" : (undefined) 00:56:46.333 Compiler for C supports arguments -Wno-format-truncation: YES 00:56:46.333 Message: lib/log: Defining dependency "log" 00:56:46.333 Message: lib/kvargs: Defining dependency "kvargs" 00:56:46.333 
Message: lib/telemetry: Defining dependency "telemetry" 00:56:46.333 Checking for function "getentropy" : NO 00:56:46.333 Message: lib/eal: Defining dependency "eal" 00:56:46.333 Message: lib/ring: Defining dependency "ring" 00:56:46.333 Message: lib/rcu: Defining dependency "rcu" 00:56:46.333 Message: lib/mempool: Defining dependency "mempool" 00:56:46.333 Message: lib/mbuf: Defining dependency "mbuf" 00:56:46.333 Fetching value of define "__PCLMUL__" : 1 (cached) 00:56:46.333 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:56:46.333 Compiler for C supports arguments -mpclmul: YES 00:56:46.333 Compiler for C supports arguments -maes: YES 00:56:46.333 Compiler for C supports arguments -mavx512f: YES (cached) 00:56:46.333 Compiler for C supports arguments -mavx512bw: YES 00:56:46.333 Compiler for C supports arguments -mavx512dq: YES 00:56:46.333 Compiler for C supports arguments -mavx512vl: YES 00:56:46.333 Compiler for C supports arguments -mvpclmulqdq: YES 00:56:46.333 Compiler for C supports arguments -mavx2: YES 00:56:46.333 Compiler for C supports arguments -mavx: YES 00:56:46.333 Message: lib/net: Defining dependency "net" 00:56:46.333 Message: lib/meter: Defining dependency "meter" 00:56:46.333 Message: lib/ethdev: Defining dependency "ethdev" 00:56:46.333 Message: lib/pci: Defining dependency "pci" 00:56:46.333 Message: lib/cmdline: Defining dependency "cmdline" 00:56:46.333 Message: lib/metrics: Defining dependency "metrics" 00:56:46.333 Message: lib/hash: Defining dependency "hash" 00:56:46.333 Message: lib/timer: Defining dependency "timer" 00:56:46.333 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:56:46.333 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:56:46.333 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:56:46.333 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:56:46.333 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:56:46.333 Message: lib/acl: Defining dependency "acl" 00:56:46.333 Message: lib/bbdev: Defining dependency "bbdev" 00:56:46.333 Message: lib/bitratestats: Defining dependency "bitratestats" 00:56:46.333 Run-time dependency libelf found: YES 0.190 00:56:46.333 Message: lib/bpf: Defining dependency "bpf" 00:56:46.333 Message: lib/cfgfile: Defining dependency "cfgfile" 00:56:46.333 Message: lib/compressdev: Defining dependency "compressdev" 00:56:46.333 Message: lib/cryptodev: Defining dependency "cryptodev" 00:56:46.333 Message: lib/distributor: Defining dependency "distributor" 00:56:46.333 Message: lib/dmadev: Defining dependency "dmadev" 00:56:46.333 Message: lib/efd: Defining dependency "efd" 00:56:46.333 Message: lib/eventdev: Defining dependency "eventdev" 00:56:46.333 Message: lib/dispatcher: Defining dependency "dispatcher" 00:56:46.333 Message: lib/gpudev: Defining dependency "gpudev" 00:56:46.333 Message: lib/gro: Defining dependency "gro" 00:56:46.333 Message: lib/gso: Defining dependency "gso" 00:56:46.333 Message: lib/ip_frag: Defining dependency "ip_frag" 00:56:46.333 Message: lib/jobstats: Defining dependency "jobstats" 00:56:46.333 Message: lib/latencystats: Defining dependency "latencystats" 00:56:46.333 Message: lib/lpm: Defining dependency "lpm" 00:56:46.333 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:56:46.333 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:56:46.333 Fetching value of define "__AVX512IFMA__" : (undefined) 00:56:46.333 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:56:46.333 Message: lib/member: Defining dependency "member" 00:56:46.333 Message: lib/pcapng: Defining dependency "pcapng" 00:56:46.333 Compiler for C supports arguments -Wno-cast-qual: YES 00:56:46.333 Message: lib/power: Defining dependency "power" 00:56:46.333 Message: lib/rawdev: Defining dependency "rawdev" 00:56:46.333 Message: lib/regexdev: Defining dependency "regexdev" 00:56:46.333 Message: lib/mldev: Defining dependency "mldev" 00:56:46.333 Message: lib/rib: Defining dependency "rib" 00:56:46.333 Message: lib/reorder: Defining dependency "reorder" 00:56:46.333 Message: lib/sched: Defining dependency "sched" 00:56:46.333 Message: lib/security: Defining dependency "security" 00:56:46.333 Message: lib/stack: Defining dependency "stack" 00:56:46.333 Has header "linux/userfaultfd.h" : YES 00:56:46.333 Has header "linux/vduse.h" : YES 00:56:46.333 Message: lib/vhost: Defining dependency "vhost" 00:56:46.333 Message: lib/ipsec: Defining dependency "ipsec" 00:56:46.333 Message: lib/pdcp: Defining dependency "pdcp" 00:56:46.333 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:56:46.333 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:56:46.333 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:56:46.333 Compiler for C supports arguments -mavx512bw: YES (cached) 00:56:46.333 Message: lib/fib: Defining dependency "fib" 00:56:46.333 Message: lib/port: Defining dependency "port" 00:56:46.333 Message: lib/pdump: Defining dependency "pdump" 00:56:46.333 Message: lib/table: Defining dependency "table" 00:56:46.333 Message: lib/pipeline: Defining dependency "pipeline" 00:56:46.333 Message: lib/graph: Defining dependency "graph" 00:56:46.333 Message: lib/node: Defining dependency "node" 00:56:46.333 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:56:47.266 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:56:47.266 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:56:47.266 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:56:47.266 Compiler for C supports arguments -Wno-sign-compare: YES 00:56:47.266 Compiler for C supports arguments -Wno-unused-value: YES 00:56:47.266 Compiler for C supports arguments -Wno-format: YES 00:56:47.266 Compiler for C supports arguments -Wno-format-security: YES 00:56:47.266 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:56:47.266 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:56:47.266 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:56:47.266 Compiler for C supports arguments -Wno-unused-parameter: YES 00:56:47.266 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:56:47.266 Compiler for C supports arguments -mavx512f: YES (cached) 00:56:47.266 Compiler for C supports arguments -mavx512bw: YES (cached) 00:56:47.266 Compiler for C supports arguments -march=skylake-avx512: YES 00:56:47.266 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:56:47.266 Has header "sys/epoll.h" : YES 00:56:47.266 Program doxygen found: YES (/usr/bin/doxygen) 00:56:47.266 Configuring doxy-api-html.conf using configuration 00:56:47.266 Configuring doxy-api-man.conf using configuration 00:56:47.266 Program mandb found: YES (/usr/bin/mandb) 00:56:47.266 Program sphinx-build found: NO 00:56:47.266 Configuring rte_build_config.h using configuration 00:56:47.266 Message: 00:56:47.266 ================= 00:56:47.266 Applications Enabled 00:56:47.266 ================= 00:56:47.266 
00:56:47.266 apps: 00:56:47.266 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:56:47.266 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:56:47.266 test-pmd, test-regex, test-sad, test-security-perf, 00:56:47.266 00:56:47.266 Message: 00:56:47.266 ================= 00:56:47.266 Libraries Enabled 00:56:47.266 ================= 00:56:47.266 00:56:47.266 libs: 00:56:47.266 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:56:47.266 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:56:47.266 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:56:47.266 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:56:47.266 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:56:47.266 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:56:47.266 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:56:47.266 00:56:47.266 00:56:47.266 Message: 00:56:47.266 =============== 00:56:47.266 Drivers Enabled 00:56:47.266 =============== 00:56:47.266 00:56:47.266 common: 00:56:47.266 00:56:47.266 bus: 00:56:47.266 pci, vdev, 00:56:47.266 mempool: 00:56:47.266 ring, 00:56:47.266 dma: 00:56:47.266 00:56:47.266 net: 00:56:47.266 i40e, 00:56:47.266 raw: 00:56:47.266 00:56:47.266 crypto: 00:56:47.266 00:56:47.266 compress: 00:56:47.266 00:56:47.266 regex: 00:56:47.266 00:56:47.266 ml: 00:56:47.266 00:56:47.266 vdpa: 00:56:47.266 00:56:47.266 event: 00:56:47.266 00:56:47.266 baseband: 00:56:47.266 00:56:47.266 gpu: 00:56:47.266 00:56:47.266 00:56:47.266 Message: 00:56:47.266 ================= 00:56:47.266 Content Skipped 00:56:47.266 ================= 00:56:47.266 00:56:47.266 apps: 00:56:47.266 00:56:47.266 libs: 00:56:47.266 00:56:47.266 drivers: 00:56:47.266 common/cpt: not in enabled drivers build config 00:56:47.267 common/dpaax: not in enabled drivers build config 00:56:47.267 common/iavf: not in enabled drivers build config 00:56:47.267 common/idpf: not in enabled drivers build config 00:56:47.267 common/mvep: not in enabled drivers build config 00:56:47.267 common/octeontx: not in enabled drivers build config 00:56:47.267 bus/auxiliary: not in enabled drivers build config 00:56:47.267 bus/cdx: not in enabled drivers build config 00:56:47.267 bus/dpaa: not in enabled drivers build config 00:56:47.267 bus/fslmc: not in enabled drivers build config 00:56:47.267 bus/ifpga: not in enabled drivers build config 00:56:47.267 bus/platform: not in enabled drivers build config 00:56:47.267 bus/vmbus: not in enabled drivers build config 00:56:47.267 common/cnxk: not in enabled drivers build config 00:56:47.267 common/mlx5: not in enabled drivers build config 00:56:47.267 common/nfp: not in enabled drivers build config 00:56:47.267 common/qat: not in enabled drivers build config 00:56:47.267 common/sfc_efx: not in enabled drivers build config 00:56:47.267 mempool/bucket: not in enabled drivers build config 00:56:47.267 mempool/cnxk: not in enabled drivers build config 00:56:47.267 mempool/dpaa: not in enabled drivers build config 00:56:47.267 mempool/dpaa2: not in enabled drivers build config 00:56:47.267 mempool/octeontx: not in enabled drivers build config 00:56:47.267 mempool/stack: not in enabled drivers build config 00:56:47.267 dma/cnxk: not in enabled drivers build config 00:56:47.267 dma/dpaa: not in enabled drivers build config 00:56:47.267 dma/dpaa2: not in enabled drivers build config 00:56:47.267 dma/hisilicon: 
not in enabled drivers build config 00:56:47.267 dma/idxd: not in enabled drivers build config 00:56:47.267 dma/ioat: not in enabled drivers build config 00:56:47.267 dma/skeleton: not in enabled drivers build config 00:56:47.267 net/af_packet: not in enabled drivers build config 00:56:47.267 net/af_xdp: not in enabled drivers build config 00:56:47.267 net/ark: not in enabled drivers build config 00:56:47.267 net/atlantic: not in enabled drivers build config 00:56:47.267 net/avp: not in enabled drivers build config 00:56:47.267 net/axgbe: not in enabled drivers build config 00:56:47.267 net/bnx2x: not in enabled drivers build config 00:56:47.267 net/bnxt: not in enabled drivers build config 00:56:47.267 net/bonding: not in enabled drivers build config 00:56:47.267 net/cnxk: not in enabled drivers build config 00:56:47.267 net/cpfl: not in enabled drivers build config 00:56:47.267 net/cxgbe: not in enabled drivers build config 00:56:47.267 net/dpaa: not in enabled drivers build config 00:56:47.267 net/dpaa2: not in enabled drivers build config 00:56:47.267 net/e1000: not in enabled drivers build config 00:56:47.267 net/ena: not in enabled drivers build config 00:56:47.267 net/enetc: not in enabled drivers build config 00:56:47.267 net/enetfec: not in enabled drivers build config 00:56:47.267 net/enic: not in enabled drivers build config 00:56:47.267 net/failsafe: not in enabled drivers build config 00:56:47.267 net/fm10k: not in enabled drivers build config 00:56:47.267 net/gve: not in enabled drivers build config 00:56:47.267 net/hinic: not in enabled drivers build config 00:56:47.267 net/hns3: not in enabled drivers build config 00:56:47.267 net/iavf: not in enabled drivers build config 00:56:47.267 net/ice: not in enabled drivers build config 00:56:47.267 net/idpf: not in enabled drivers build config 00:56:47.267 net/igc: not in enabled drivers build config 00:56:47.267 net/ionic: not in enabled drivers build config 00:56:47.267 net/ipn3ke: not in enabled drivers build config 00:56:47.267 net/ixgbe: not in enabled drivers build config 00:56:47.267 net/mana: not in enabled drivers build config 00:56:47.267 net/memif: not in enabled drivers build config 00:56:47.267 net/mlx4: not in enabled drivers build config 00:56:47.267 net/mlx5: not in enabled drivers build config 00:56:47.267 net/mvneta: not in enabled drivers build config 00:56:47.267 net/mvpp2: not in enabled drivers build config 00:56:47.267 net/netvsc: not in enabled drivers build config 00:56:47.267 net/nfb: not in enabled drivers build config 00:56:47.267 net/nfp: not in enabled drivers build config 00:56:47.267 net/ngbe: not in enabled drivers build config 00:56:47.267 net/null: not in enabled drivers build config 00:56:47.267 net/octeontx: not in enabled drivers build config 00:56:47.267 net/octeon_ep: not in enabled drivers build config 00:56:47.267 net/pcap: not in enabled drivers build config 00:56:47.267 net/pfe: not in enabled drivers build config 00:56:47.267 net/qede: not in enabled drivers build config 00:56:47.267 net/ring: not in enabled drivers build config 00:56:47.267 net/sfc: not in enabled drivers build config 00:56:47.267 net/softnic: not in enabled drivers build config 00:56:47.267 net/tap: not in enabled drivers build config 00:56:47.267 net/thunderx: not in enabled drivers build config 00:56:47.267 net/txgbe: not in enabled drivers build config 00:56:47.267 net/vdev_netvsc: not in enabled drivers build config 00:56:47.267 net/vhost: not in enabled drivers build config 00:56:47.267 net/virtio: not in enabled 
drivers build config 00:56:47.267 net/vmxnet3: not in enabled drivers build config 00:56:47.267 raw/cnxk_bphy: not in enabled drivers build config 00:56:47.267 raw/cnxk_gpio: not in enabled drivers build config 00:56:47.267 raw/dpaa2_cmdif: not in enabled drivers build config 00:56:47.267 raw/ifpga: not in enabled drivers build config 00:56:47.267 raw/ntb: not in enabled drivers build config 00:56:47.267 raw/skeleton: not in enabled drivers build config 00:56:47.267 crypto/armv8: not in enabled drivers build config 00:56:47.267 crypto/bcmfs: not in enabled drivers build config 00:56:47.267 crypto/caam_jr: not in enabled drivers build config 00:56:47.267 crypto/ccp: not in enabled drivers build config 00:56:47.267 crypto/cnxk: not in enabled drivers build config 00:56:47.267 crypto/dpaa_sec: not in enabled drivers build config 00:56:47.267 crypto/dpaa2_sec: not in enabled drivers build config 00:56:47.267 crypto/ipsec_mb: not in enabled drivers build config 00:56:47.267 crypto/mlx5: not in enabled drivers build config 00:56:47.267 crypto/mvsam: not in enabled drivers build config 00:56:47.267 crypto/nitrox: not in enabled drivers build config 00:56:47.267 crypto/null: not in enabled drivers build config 00:56:47.267 crypto/octeontx: not in enabled drivers build config 00:56:47.267 crypto/openssl: not in enabled drivers build config 00:56:47.267 crypto/scheduler: not in enabled drivers build config 00:56:47.267 crypto/uadk: not in enabled drivers build config 00:56:47.267 crypto/virtio: not in enabled drivers build config 00:56:47.267 compress/isal: not in enabled drivers build config 00:56:47.267 compress/mlx5: not in enabled drivers build config 00:56:47.267 compress/octeontx: not in enabled drivers build config 00:56:47.267 compress/zlib: not in enabled drivers build config 00:56:47.267 regex/mlx5: not in enabled drivers build config 00:56:47.267 regex/cn9k: not in enabled drivers build config 00:56:47.267 ml/cnxk: not in enabled drivers build config 00:56:47.267 vdpa/ifc: not in enabled drivers build config 00:56:47.267 vdpa/mlx5: not in enabled drivers build config 00:56:47.267 vdpa/nfp: not in enabled drivers build config 00:56:47.267 vdpa/sfc: not in enabled drivers build config 00:56:47.267 event/cnxk: not in enabled drivers build config 00:56:47.267 event/dlb2: not in enabled drivers build config 00:56:47.267 event/dpaa: not in enabled drivers build config 00:56:47.267 event/dpaa2: not in enabled drivers build config 00:56:47.267 event/dsw: not in enabled drivers build config 00:56:47.267 event/opdl: not in enabled drivers build config 00:56:47.267 event/skeleton: not in enabled drivers build config 00:56:47.267 event/sw: not in enabled drivers build config 00:56:47.267 event/octeontx: not in enabled drivers build config 00:56:47.267 baseband/acc: not in enabled drivers build config 00:56:47.267 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:56:47.267 baseband/fpga_lte_fec: not in enabled drivers build config 00:56:47.267 baseband/la12xx: not in enabled drivers build config 00:56:47.267 baseband/null: not in enabled drivers build config 00:56:47.267 baseband/turbo_sw: not in enabled drivers build config 00:56:47.267 gpu/cuda: not in enabled drivers build config 00:56:47.267 00:56:47.267 00:56:47.267 Build targets in project: 220 00:56:47.267 00:56:47.267 DPDK 23.11.0 00:56:47.267 00:56:47.267 User defined options 00:56:47.267 libdir : lib 00:56:47.267 prefix : /home/vagrant/spdk_repo/dpdk/build 00:56:47.267 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:56:47.267 c_link_args : 00:56:47.267 enable_docs : false 00:56:47.267 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:56:47.267 enable_kmods : false 00:56:47.267 machine : native 00:56:47.267 tests : false 00:56:47.267 00:56:47.267 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:56:47.267 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:56:47.524 10:53:52 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:56:47.525 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:56:47.525 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:56:47.525 [2/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:56:47.525 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:56:47.525 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:56:47.525 [5/710] Linking static target lib/librte_kvargs.a 00:56:47.781 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:56:47.781 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:56:47.781 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:56:47.781 [9/710] Linking static target lib/librte_log.a 00:56:47.781 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:56:47.781 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:56:48.038 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:56:48.038 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:56:48.038 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:56:48.294 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:56:48.294 [16/710] Linking target lib/librte_log.so.24.0 00:56:48.294 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:56:48.294 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:56:48.550 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:56:48.550 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:56:48.550 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:56:48.550 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:56:48.550 [23/710] Linking target lib/librte_kvargs.so.24.0 00:56:48.551 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:56:48.807 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:56:48.807 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:56:48.807 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:56:48.807 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:56:48.807 [29/710] Linking static target lib/librte_telemetry.a 00:56:48.807 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:56:48.807 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:56:49.064 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:56:49.064 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:56:49.321 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:56:49.321 [35/710] Linking target lib/librte_telemetry.so.24.0 00:56:49.321 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:56:49.321 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:56:49.321 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:56:49.321 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:56:49.321 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:56:49.321 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:56:49.321 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:56:49.321 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:56:49.321 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:56:49.578 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:56:49.835 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:56:49.835 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:56:49.835 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:56:50.092 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:56:50.092 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:56:50.092 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:56:50.092 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:56:50.092 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:56:50.092 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:56:50.349 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:56:50.349 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:56:50.349 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:56:50.349 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:56:50.349 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:56:50.349 [60/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:56:50.349 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:56:50.607 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:56:50.607 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:56:50.607 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:56:50.607 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:56:50.607 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:56:50.864 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:56:50.864 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:56:51.122 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:56:51.122 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:56:51.122 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:56:51.122 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:56:51.122 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:56:51.122 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:56:51.122 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:56:51.122 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:56:51.122 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:56:51.379 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:56:51.380 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:56:51.637 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:56:51.637 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:56:51.637 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:56:51.637 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:56:51.894 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:56:51.894 [85/710] Linking static target lib/librte_ring.a 00:56:51.894 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:56:51.894 [87/710] Linking static target lib/librte_eal.a 00:56:52.168 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:56:52.168 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:56:52.168 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:56:52.433 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:56:52.433 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:56:52.433 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:56:52.433 [94/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:56:52.433 [95/710] Linking static target lib/librte_mempool.a 00:56:52.433 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:56:52.433 [97/710] Linking static target lib/librte_rcu.a 00:56:52.691 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:56:52.691 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:56:52.691 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:56:52.691 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:56:52.948 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:56:52.948 [103/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:56:52.948 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:56:52.948 [105/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:56:53.205 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:56:53.205 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:56:53.205 [108/710] Linking static target lib/librte_mbuf.a 00:56:53.205 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:56:53.205 [110/710] Linking static target lib/librte_net.a 00:56:53.463 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:56:53.463 [112/710] Linking static target lib/librte_meter.a 00:56:53.463 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:56:53.719 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:56:53.720 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:56:53.720 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:56:53.720 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:56:53.720 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:56:53.720 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:56:54.282 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:56:54.282 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:56:54.538 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:56:54.795 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:56:54.795 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:56:54.795 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:56:54.795 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:56:54.795 [127/710] Linking static target lib/librte_pci.a 00:56:54.795 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:56:55.051 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:56:55.051 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:56:55.051 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:56:55.051 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:56:55.051 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:56:55.051 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:56:55.051 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:56:55.051 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:56:55.307 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:56:55.307 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:56:55.307 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:56:55.307 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:56:55.307 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:56:55.564 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:56:55.564 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:56:55.564 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:56:55.564 [145/710] Linking static target lib/librte_cmdline.a 00:56:55.820 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:56:55.820 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:56:55.820 [148/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:56:55.820 [149/710] Linking static target lib/librte_metrics.a 00:56:56.077 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:56:56.333 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:56:56.590 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:56:56.590 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:56:56.590 [154/710] Linking static target 
lib/librte_timer.a 00:56:56.590 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:56:56.847 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:56:57.104 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:56:57.360 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:56:57.360 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:56:57.360 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:56:57.928 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:56:57.928 [162/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:56:57.928 [163/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:56:57.928 [164/710] Linking static target lib/librte_bitratestats.a 00:56:58.187 [165/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:56:58.187 [166/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:56:58.187 [167/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:56:58.187 [168/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:56:58.187 [169/710] Linking static target lib/librte_hash.a 00:56:58.187 [170/710] Linking static target lib/librte_ethdev.a 00:56:58.187 [171/710] Linking target lib/librte_eal.so.24.0 00:56:58.187 [172/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:56:58.187 [173/710] Linking static target lib/librte_bbdev.a 00:56:58.444 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:56:58.444 [175/710] Linking target lib/librte_ring.so.24.0 00:56:58.444 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:56:58.444 [177/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:56:58.444 [178/710] Linking target lib/librte_meter.so.24.0 00:56:58.701 [179/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:56:58.701 [180/710] Linking target lib/librte_rcu.so.24.0 00:56:58.701 [181/710] Linking target lib/librte_mempool.so.24.0 00:56:58.701 [182/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:56:58.701 [183/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:56:58.701 [184/710] Linking target lib/librte_pci.so.24.0 00:56:58.701 [185/710] Linking target lib/librte_timer.so.24.0 00:56:58.701 [186/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:56:58.701 [187/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:56:58.701 [188/710] Linking static target lib/acl/libavx2_tmp.a 00:56:58.701 [189/710] Linking target lib/librte_mbuf.so.24.0 00:56:58.701 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:56:58.958 [191/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:56:58.958 [192/710] Linking static target lib/acl/libavx512_tmp.a 00:56:58.958 [193/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:56:58.958 [194/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:56:58.958 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:56:58.958 [196/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:56:58.958 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:56:58.958 [198/710] Linking target lib/librte_net.so.24.0 00:56:59.215 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:56:59.215 [200/710] Linking static target lib/librte_acl.a 00:56:59.215 [201/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:56:59.215 [202/710] Linking target lib/librte_cmdline.so.24.0 00:56:59.215 [203/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:56:59.215 [204/710] Linking target lib/librte_hash.so.24.0 00:56:59.215 [205/710] Linking target lib/librte_bbdev.so.24.0 00:56:59.215 [206/710] Linking static target lib/librte_cfgfile.a 00:56:59.471 [207/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:56:59.471 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:56:59.471 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:56:59.471 [210/710] Linking target lib/librte_acl.so.24.0 00:56:59.728 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:56:59.728 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:56:59.728 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:56:59.728 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:56:59.728 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:56:59.728 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:56:59.984 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:57:00.241 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:57:00.241 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:57:00.241 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:57:00.241 [221/710] Linking static target lib/librte_bpf.a 00:57:00.241 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:57:00.497 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:57:00.497 [224/710] Linking static target lib/librte_compressdev.a 00:57:00.497 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:57:00.497 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:57:00.754 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:57:00.754 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:57:01.011 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:57:01.011 [230/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:01.011 [231/710] Linking static target lib/librte_distributor.a 00:57:01.011 [232/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:57:01.011 [233/710] Linking target lib/librte_compressdev.so.24.0 00:57:01.266 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:57:01.266 [235/710] Linking target lib/librte_distributor.so.24.0 00:57:01.266 [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:57:01.266 [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:57:01.266 [238/710] Linking static 
target lib/librte_dmadev.a 00:57:01.830 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:01.830 [240/710] Linking target lib/librte_dmadev.so.24.0 00:57:01.830 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:57:01.830 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:57:02.087 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:57:02.087 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:57:02.087 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:57:02.087 [246/710] Linking static target lib/librte_efd.a 00:57:02.087 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:57:02.087 [248/710] Linking static target lib/librte_cryptodev.a 00:57:02.343 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:57:02.343 [250/710] Linking target lib/librte_efd.so.24.0 00:57:02.343 [251/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:57:02.909 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:57:02.909 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:57:02.909 [254/710] Linking static target lib/librte_dispatcher.a 00:57:03.166 [255/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:57:03.166 [256/710] Linking static target lib/librte_gpudev.a 00:57:03.166 [257/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:03.166 [258/710] Linking target lib/librte_ethdev.so.24.0 00:57:03.166 [259/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:57:03.166 [260/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:57:03.520 [261/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:57:03.520 [262/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:57:03.520 [263/710] Linking target lib/librte_metrics.so.24.0 00:57:03.520 [264/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:57:03.520 [265/710] Linking target lib/librte_bpf.so.24.0 00:57:03.520 [266/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:57:03.520 [267/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:03.520 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:57:03.520 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:57:03.520 [270/710] Linking target lib/librte_bitratestats.so.24.0 00:57:03.520 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:57:03.800 [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:57:03.800 [273/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:57:03.800 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:03.800 [275/710] Linking target lib/librte_gpudev.so.24.0 00:57:04.057 [276/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:57:04.057 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:57:04.057 [278/710] Linking static target lib/librte_eventdev.a 00:57:04.057 
[279/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:57:04.057 [280/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:57:04.315 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:57:04.315 [282/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:57:04.315 [283/710] Linking static target lib/librte_gro.a 00:57:04.315 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:57:04.315 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:57:04.572 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:57:04.572 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:57:04.572 [288/710] Linking target lib/librte_gro.so.24.0 00:57:04.572 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:57:04.572 [290/710] Linking static target lib/librte_gso.a 00:57:04.848 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:57:04.848 [292/710] Linking target lib/librte_gso.so.24.0 00:57:04.848 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:57:05.106 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:57:05.106 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:57:05.106 [296/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:57:05.106 [297/710] Linking static target lib/librte_jobstats.a 00:57:05.106 [298/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:57:05.106 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:57:05.363 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:57:05.363 [301/710] Linking static target lib/librte_ip_frag.a 00:57:05.363 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:57:05.363 [303/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:57:05.363 [304/710] Linking static target lib/librte_latencystats.a 00:57:05.363 [305/710] Linking target lib/librte_jobstats.so.24.0 00:57:05.620 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:57:05.620 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:57:05.620 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:57:05.620 [309/710] Linking target lib/librte_latencystats.so.24.0 00:57:05.620 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:57:05.877 [311/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:57:05.877 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:57:05.877 [313/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:57:05.877 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:57:05.877 [315/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:57:05.877 [316/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:57:05.877 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:57:06.442 [318/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:57:06.442 [319/710] Linking static target lib/librte_lpm.a 00:57:06.442 [320/710] Generating lib/eventdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:57:06.442 [321/710] Linking target lib/librte_eventdev.so.24.0 00:57:06.442 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:57:06.442 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:57:06.442 [324/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:57:06.699 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:57:06.699 [326/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:57:06.699 [327/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:57:06.699 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:57:06.699 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:57:06.699 [330/710] Linking static target lib/librte_pcapng.a 00:57:06.699 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:57:06.699 [332/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:57:06.699 [333/710] Linking target lib/librte_lpm.so.24.0 00:57:06.957 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:57:06.957 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:57:06.957 [336/710] Linking target lib/librte_pcapng.so.24.0 00:57:07.215 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:57:07.215 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:57:07.215 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:57:07.215 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:57:07.472 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:57:07.472 [342/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:57:07.472 [343/710] Linking static target lib/librte_member.a 00:57:07.472 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:57:07.472 [345/710] Linking static target lib/librte_rawdev.a 00:57:07.472 [346/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:57:07.472 [347/710] Linking static target lib/librte_power.a 00:57:07.729 [348/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:57:07.729 [349/710] Linking static target lib/librte_regexdev.a 00:57:07.729 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:57:07.729 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:57:07.729 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:57:07.985 [353/710] Linking target lib/librte_member.so.24.0 00:57:07.985 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:57:07.985 [355/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:07.985 [356/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:57:07.985 [357/710] Linking static target lib/librte_mldev.a 00:57:07.985 [358/710] Linking target lib/librte_rawdev.so.24.0 00:57:07.985 [359/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:57:08.241 [360/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:57:08.241 [361/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to 
capture output) 00:57:08.241 [362/710] Linking target lib/librte_power.so.24.0 00:57:08.498 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:08.498 [364/710] Linking target lib/librte_regexdev.so.24.0 00:57:08.498 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:57:08.498 [366/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:57:08.754 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:57:08.754 [368/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:57:08.754 [369/710] Linking static target lib/librte_reorder.a 00:57:08.755 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:57:08.755 [371/710] Linking static target lib/librte_rib.a 00:57:08.755 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:57:08.755 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:57:09.011 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:57:09.011 [375/710] Linking static target lib/librte_stack.a 00:57:09.011 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:57:09.011 [377/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:57:09.011 [378/710] Linking static target lib/librte_security.a 00:57:09.011 [379/710] Linking target lib/librte_reorder.so.24.0 00:57:09.268 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:57:09.268 [381/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:57:09.268 [382/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:57:09.268 [383/710] Linking target lib/librte_rib.so.24.0 00:57:09.268 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:09.268 [385/710] Linking target lib/librte_stack.so.24.0 00:57:09.268 [386/710] Linking target lib/librte_mldev.so.24.0 00:57:09.268 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:57:09.524 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:57:09.524 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:57:09.524 [390/710] Linking target lib/librte_security.so.24.0 00:57:09.524 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:57:09.524 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:57:09.782 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:57:09.782 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:57:09.782 [395/710] Linking static target lib/librte_sched.a 00:57:10.039 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:57:10.296 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:57:10.296 [398/710] Linking target lib/librte_sched.so.24.0 00:57:10.296 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:57:10.296 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:57:10.296 [401/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:57:10.553 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:57:10.811 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:57:10.811 [404/710] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:57:11.068 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:57:11.325 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:57:11.325 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:57:11.325 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:57:11.325 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:57:11.325 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:57:11.582 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:57:11.838 [412/710] Linking static target lib/librte_ipsec.a 00:57:11.839 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:57:12.095 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:57:12.095 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:57:12.095 [416/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:57:12.095 [417/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:57:12.095 [418/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:57:12.095 [419/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:57:12.095 [420/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:57:12.095 [421/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:57:12.095 [422/710] Linking target lib/librte_ipsec.so.24.0 00:57:12.352 [423/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:57:12.916 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:57:12.916 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:57:12.916 [426/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:57:12.916 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:57:13.174 [428/710] Linking static target lib/librte_pdcp.a 00:57:13.174 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:57:13.174 [430/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:57:13.174 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:57:13.174 [432/710] Linking static target lib/librte_fib.a 00:57:13.432 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:57:13.432 [434/710] Linking target lib/librte_pdcp.so.24.0 00:57:13.432 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:57:13.432 [436/710] Linking target lib/librte_fib.so.24.0 00:57:13.690 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:57:13.949 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:57:13.949 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:57:14.206 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:57:14.206 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:57:14.206 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:57:14.463 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:57:14.463 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:57:14.721 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:57:14.721 [446/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:57:14.721 [447/710] Linking static target lib/librte_port.a 00:57:14.978 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:57:14.978 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:57:14.978 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:57:14.978 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:57:15.235 [452/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:57:15.235 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:57:15.235 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:57:15.235 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:57:15.235 [456/710] Linking static target lib/librte_pdump.a 00:57:15.235 [457/710] Linking target lib/librte_port.so.24.0 00:57:15.493 [458/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:57:15.493 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:57:15.493 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:57:15.750 [461/710] Linking target lib/librte_pdump.so.24.0 00:57:15.750 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:57:16.007 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:57:16.007 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:57:16.264 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:57:16.264 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:57:16.264 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:57:16.264 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:57:16.522 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:57:16.522 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:57:16.522 [471/710] Linking static target lib/librte_table.a 00:57:16.779 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:57:16.779 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:57:17.346 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:57:17.346 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:57:17.346 [476/710] Linking target lib/librte_table.so.24.0 00:57:17.346 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:57:17.346 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:57:17.604 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:57:17.863 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:57:17.863 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:57:18.120 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:57:18.120 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:57:18.120 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:57:18.378 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:57:18.378 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:57:18.635 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:57:18.892 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:57:18.892 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:57:18.892 [490/710] Linking static target lib/librte_graph.a 00:57:19.149 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:57:19.149 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:57:19.149 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:57:19.405 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:57:19.405 [495/710] Linking target lib/librte_graph.so.24.0 00:57:19.405 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:57:19.662 [497/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:57:19.662 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:57:19.662 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:57:20.227 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:57:20.227 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:57:20.227 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:57:20.227 [503/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:57:20.227 [504/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:57:20.227 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:57:20.485 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:57:20.743 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:57:20.743 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:57:20.743 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:57:21.000 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:57:21.000 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:57:21.000 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:57:21.000 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:57:21.000 [514/710] Linking static target lib/librte_node.a 00:57:21.258 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:57:21.258 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:57:21.515 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:57:21.515 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:57:21.515 [519/710] Linking target lib/librte_node.so.24.0 00:57:21.515 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:57:21.515 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:57:21.515 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:57:21.515 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:57:21.515 [524/710] Linking static target drivers/librte_bus_vdev.a 00:57:21.786 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:57:21.786 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:57:21.786 [527/710] Linking static target drivers/librte_bus_pci.a 00:57:21.786 [528/710] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:22.044 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:57:22.044 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:57:22.044 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:57:22.044 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:57:22.044 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:57:22.044 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:57:22.044 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:57:22.302 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:57:22.302 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:57:22.302 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:57:22.302 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:57:22.302 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:57:22.302 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:57:22.302 [542/710] Linking static target drivers/librte_mempool_ring.a 00:57:22.302 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:57:22.560 [544/710] Linking target drivers/librte_mempool_ring.so.24.0 00:57:22.560 [545/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:57:22.818 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:57:23.076 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:57:23.334 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:57:23.334 [549/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:57:23.592 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:57:23.592 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:57:24.159 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:57:24.418 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:57:24.418 [554/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:57:24.418 [555/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:57:24.418 [556/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:57:24.418 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:57:24.985 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:57:24.985 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:57:24.985 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:57:25.243 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:57:25.243 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:57:25.829 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:57:25.829 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 
00:57:25.829 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:57:26.130 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:57:26.388 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:57:26.646 [568/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:57:26.646 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:57:26.646 [570/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:57:26.646 [571/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:57:26.646 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:57:26.646 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:57:26.902 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:57:27.158 [575/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:57:27.158 [576/710] Linking static target lib/librte_vhost.a 00:57:27.158 [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:57:27.158 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:57:27.415 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:57:27.415 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:57:27.415 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:57:27.672 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:57:27.929 [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:57:27.929 [584/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:57:27.929 [585/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:57:27.929 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:57:27.929 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:57:27.929 [588/710] Linking static target drivers/librte_net_i40e.a 00:57:27.929 [589/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:57:27.929 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:57:27.929 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:57:28.185 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:57:28.441 [593/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:57:28.441 [594/710] Linking target lib/librte_vhost.so.24.0 00:57:28.441 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:57:28.697 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:57:28.697 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:57:28.697 [598/710] Linking target drivers/librte_net_i40e.so.24.0 00:57:28.954 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:57:29.211 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:57:29.212 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:57:29.212 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:57:29.212 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:57:29.777 [604/710] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:57:29.777 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:57:29.777 [606/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:57:29.777 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:57:30.036 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:57:30.293 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:57:30.293 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:57:30.293 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:57:30.293 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:57:30.293 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:57:30.551 [614/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:57:30.551 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:57:30.551 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:57:30.551 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:57:30.808 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:57:31.065 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:57:31.065 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:57:31.323 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:57:31.323 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:57:31.580 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:57:32.145 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:57:32.145 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:57:32.145 [626/710] Linking static target lib/librte_pipeline.a 00:57:32.145 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:57:32.403 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:57:32.403 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:57:32.403 [630/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:57:32.660 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:57:32.660 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:57:32.660 [633/710] Linking target app/dpdk-dumpcap 00:57:32.918 [634/710] Linking target app/dpdk-graph 00:57:32.918 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:57:32.918 [636/710] Linking target app/dpdk-pdump 00:57:32.918 [637/710] Linking target app/dpdk-proc-info 00:57:33.175 [638/710] Linking target app/dpdk-test-cmdline 00:57:33.175 [639/710] Linking target app/dpdk-test-acl 00:57:33.175 [640/710] Linking target app/dpdk-test-compress-perf 00:57:33.444 [641/710] Linking target app/dpdk-test-dma-perf 00:57:33.444 [642/710] Linking target app/dpdk-test-crypto-perf 00:57:33.444 [643/710] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:57:33.444 [644/710] Linking target app/dpdk-test-fib 00:57:33.701 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:57:33.701 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:57:33.701 [647/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:57:33.958 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:57:33.958 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:57:34.215 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:57:34.215 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:57:34.215 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:57:34.215 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:57:34.215 [654/710] Linking target app/dpdk-test-gpudev 00:57:34.472 [655/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:57:34.472 [656/710] Linking target app/dpdk-test-eventdev 00:57:34.472 [657/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:57:35.035 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:57:35.035 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:57:35.035 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:57:35.035 [661/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:57:35.035 [662/710] Linking target app/dpdk-test-flow-perf 00:57:35.035 [663/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:57:35.035 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:57:35.035 [665/710] Linking target lib/librte_pipeline.so.24.0 00:57:35.293 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:57:35.293 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:57:35.551 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:57:35.551 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:57:35.551 [670/710] Linking target app/dpdk-test-bbdev 00:57:35.822 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:57:35.822 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:57:35.822 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:57:36.081 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:57:36.338 [675/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:57:36.338 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:57:36.338 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:57:36.338 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:57:36.601 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:57:36.601 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:57:36.601 [681/710] Linking target app/dpdk-test-mldev 00:57:36.863 [682/710] Linking target app/dpdk-test-pipeline 00:57:37.121 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:57:37.379 [684/710] 
Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:57:37.379 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:57:37.379 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:57:37.636 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:57:37.636 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:57:37.636 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:57:37.893 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:57:38.150 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:57:38.150 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:57:38.150 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:57:38.409 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:57:38.666 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:57:38.666 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:57:39.232 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:57:39.232 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:57:39.232 [699/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:57:39.232 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:57:39.489 [701/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:57:39.489 [702/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:57:39.489 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:57:39.746 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:57:40.002 [705/710] Linking target app/dpdk-test-sad 00:57:40.002 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:57:40.002 [707/710] Linking target app/dpdk-test-regex 00:57:40.002 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:57:40.566 [709/710] Linking target app/dpdk-testpmd 00:57:40.566 [710/710] Linking target app/dpdk-test-security-perf 00:57:40.566 10:54:45 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:57:40.566 10:54:45 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:57:40.566 10:54:45 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:57:40.822 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:57:40.822 [0/1] Installing files. 
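The trace just above shows autobuild_common.sh checking the host OS (the FreeBSD branch is skipped on this Linux runner) and then driving the DPDK install with ninja. As a rough, hand-runnable sketch of that step only, assuming the meson-configured build tree at /home/vagrant/spdk_repo/dpdk/build-tmp from earlier in this job (the surrounding script logic is not reproduced here):

    # sketch of the install step traced above, not the autobuild script itself
    cd /home/vagrant/spdk_repo/dpdk
    uname -s                          # the script only takes a different path when this reports FreeBSD
    ninja -C build-tmp -j10 install   # installs the DPDK libraries, PMDs and the share/dpdk/examples tree listed below

The file-by-file "Installing ..." lines that follow are the output of that install target copying the examples/ tree into build/share/dpdk/examples.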
00:57:41.085 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:57:41.085 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:57:41.086 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:57:41.087 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:57:41.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:57:41.088 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:57:41.089 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:57:41.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:57:41.089 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.089 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:57:41.090 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:57:41.090 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.090 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.686 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.686 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.686 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.686 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:57:41.686 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.686 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:57:41.686 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.686 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:57:41.686 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:57:41.686 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:57:41.686 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.687 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.688 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:57:41.689 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:57:41.689 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:57:41.689 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:57:41.689 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:57:41.689 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:57:41.689 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:57:41.689 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:57:41.689 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:57:41.689 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:57:41.689 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:57:41.689 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:57:41.689 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:57:41.689 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:57:41.689 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:57:41.689 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:57:41.689 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:57:41.689 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:57:41.689 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:57:41.689 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:57:41.689 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:57:41.689 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:57:41.689 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:57:41.689 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:57:41.689 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:57:41.689 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:57:41.689 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:57:41.689 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:57:41.689 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:57:41.689 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:57:41.689 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:57:41.689 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:57:41.689 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:57:41.689 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:57:41.689 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:57:41.689 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:57:41.689 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:57:41.689 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:57:41.689 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:57:41.689 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:57:41.689 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:57:41.689 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:57:41.689 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:57:41.689 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:57:41.689 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:57:41.689 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:57:41.689 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:57:41.689 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:57:41.689 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:57:41.689 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:57:41.689 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:57:41.689 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:57:41.689 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:57:41.689 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:57:41.689 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:57:41.689 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:57:41.689 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:57:41.689 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:57:41.690 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:57:41.690 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:57:41.690 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:57:41.690 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:57:41.690 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:57:41.690 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:57:41.690 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:57:41.690 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:57:41.690 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:57:41.690 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:57:41.690 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:57:41.690 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:57:41.690 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:57:41.690 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:57:41.690 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:57:41.690 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:57:41.690 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:57:41.690 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:57:41.690 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:57:41.690 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:57:41.690 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:57:41.690 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:57:41.690 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:57:41.690 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:57:41.690 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:57:41.690 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:57:41.690 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:57:41.690 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:57:41.690 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:57:41.690 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:57:41.690 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:57:41.690 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:57:41.690 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:57:41.690 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:57:41.690 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:57:41.690 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:57:41.690 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:57:41.690 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:57:41.690 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:57:41.690 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:57:41.690 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:57:41.690 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:57:41.690 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:57:41.690 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:57:41.690 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:57:41.690 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:57:41.690 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:57:41.690 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:57:41.690 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:57:41.690 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:57:41.690 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:57:41.690 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:57:41.690 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:57:41.690 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:57:41.690 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:57:41.690 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:57:41.690 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:57:41.690 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:57:41.690 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:57:41.690 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:57:41.690 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:57:41.690 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:57:41.690 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:57:41.690 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:57:41.690 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:57:41.690 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:57:41.690 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:57:41.690 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:57:41.690 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:57:41.690 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:57:41.690 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:57:41.690 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:57:41.690 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:57:41.690 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:57:41.690 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:57:41.690 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:57:41.690 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:57:41.690 10:54:46 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:57:41.690 ************************************ 00:57:41.690 END TEST build_native_dpdk 00:57:41.690 ************************************ 00:57:41.690 10:54:46 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:57:41.690 00:57:41.690 real 1m0.775s 00:57:41.690 user 7m25.479s 00:57:41.690 sys 1m11.293s 00:57:41.690 10:54:46 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:57:41.690 10:54:46 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:57:41.690 10:54:46 -- common/autotest_common.sh@1142 -- $ return 0 00:57:41.690 10:54:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:57:41.690 10:54:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:57:41.690 10:54:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:57:41.690 10:54:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:57:41.690 10:54:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:57:41.690 10:54:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:57:41.690 10:54:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:57:41.690 10:54:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:57:44.218 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:57:44.218 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:57:44.218 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:57:44.218 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:57:44.783 Using 'verbs' RDMA provider 00:57:57.545 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:58:12.418 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:58:12.418 go version go1.21.1 linux/amd64 00:58:12.418 Creating mk/config.mk...done. 00:58:12.418 Creating mk/cc.flags.mk...done. 00:58:12.418 Type 'make' to build. 00:58:12.418 10:55:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:58:12.418 10:55:15 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:58:12.418 10:55:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:58:12.418 10:55:15 -- common/autotest_common.sh@10 -- $ set +x 00:58:12.418 ************************************ 00:58:12.418 START TEST make 00:58:12.418 ************************************ 00:58:12.418 10:55:15 make -- common/autotest_common.sh@1123 -- $ make -j10 00:58:12.418 make[1]: Nothing to be done for 'all'. 
00:58:38.941 CC lib/ut_mock/mock.o 00:58:38.941 CC lib/ut/ut.o 00:58:38.941 CC lib/log/log.o 00:58:38.941 CC lib/log/log_flags.o 00:58:38.941 CC lib/log/log_deprecated.o 00:58:38.941 LIB libspdk_ut.a 00:58:38.941 LIB libspdk_log.a 00:58:38.941 LIB libspdk_ut_mock.a 00:58:38.941 SO libspdk_ut.so.2.0 00:58:38.941 SO libspdk_ut_mock.so.6.0 00:58:38.941 SO libspdk_log.so.7.0 00:58:38.941 SYMLINK libspdk_ut.so 00:58:38.941 SYMLINK libspdk_ut_mock.so 00:58:38.941 SYMLINK libspdk_log.so 00:58:38.941 CC lib/util/base64.o 00:58:38.941 CC lib/util/bit_array.o 00:58:38.941 CC lib/ioat/ioat.o 00:58:38.941 CC lib/util/crc16.o 00:58:38.941 CC lib/util/cpuset.o 00:58:38.941 CC lib/util/crc32.o 00:58:38.941 CC lib/util/crc32c.o 00:58:38.941 CC lib/dma/dma.o 00:58:38.941 CXX lib/trace_parser/trace.o 00:58:38.941 CC lib/vfio_user/host/vfio_user_pci.o 00:58:38.941 CC lib/util/crc32_ieee.o 00:58:38.941 CC lib/util/crc64.o 00:58:38.941 CC lib/vfio_user/host/vfio_user.o 00:58:38.941 CC lib/util/dif.o 00:58:38.941 LIB libspdk_dma.a 00:58:38.941 CC lib/util/fd.o 00:58:38.941 SO libspdk_dma.so.4.0 00:58:38.941 CC lib/util/fd_group.o 00:58:38.941 LIB libspdk_ioat.a 00:58:38.941 SYMLINK libspdk_dma.so 00:58:38.941 CC lib/util/file.o 00:58:38.941 CC lib/util/hexlify.o 00:58:38.941 CC lib/util/iov.o 00:58:38.941 SO libspdk_ioat.so.7.0 00:58:38.941 CC lib/util/math.o 00:58:38.941 CC lib/util/net.o 00:58:38.942 SYMLINK libspdk_ioat.so 00:58:38.942 CC lib/util/pipe.o 00:58:38.942 LIB libspdk_vfio_user.a 00:58:38.942 CC lib/util/strerror_tls.o 00:58:38.942 SO libspdk_vfio_user.so.5.0 00:58:38.942 CC lib/util/string.o 00:58:38.942 CC lib/util/uuid.o 00:58:38.942 SYMLINK libspdk_vfio_user.so 00:58:38.942 CC lib/util/xor.o 00:58:38.942 CC lib/util/zipf.o 00:58:38.942 LIB libspdk_util.a 00:58:38.942 SO libspdk_util.so.10.0 00:58:38.942 SYMLINK libspdk_util.so 00:58:38.942 LIB libspdk_trace_parser.a 00:58:38.942 SO libspdk_trace_parser.so.5.0 00:58:38.942 CC lib/json/json_parse.o 00:58:38.942 CC lib/json/json_util.o 00:58:38.942 CC lib/json/json_write.o 00:58:38.942 SYMLINK libspdk_trace_parser.so 00:58:38.942 CC lib/vmd/vmd.o 00:58:38.942 CC lib/vmd/led.o 00:58:38.942 CC lib/rdma_provider/common.o 00:58:38.942 CC lib/idxd/idxd.o 00:58:38.942 CC lib/conf/conf.o 00:58:38.942 CC lib/env_dpdk/env.o 00:58:38.942 CC lib/rdma_utils/rdma_utils.o 00:58:38.942 CC lib/env_dpdk/memory.o 00:58:38.942 LIB libspdk_conf.a 00:58:38.942 CC lib/rdma_provider/rdma_provider_verbs.o 00:58:38.942 CC lib/idxd/idxd_user.o 00:58:38.942 CC lib/idxd/idxd_kernel.o 00:58:38.942 SO libspdk_conf.so.6.0 00:58:38.942 LIB libspdk_json.a 00:58:38.942 LIB libspdk_rdma_utils.a 00:58:38.942 SO libspdk_rdma_utils.so.1.0 00:58:38.942 SO libspdk_json.so.6.0 00:58:38.942 SYMLINK libspdk_conf.so 00:58:38.942 CC lib/env_dpdk/pci.o 00:58:38.942 SYMLINK libspdk_rdma_utils.so 00:58:38.942 CC lib/env_dpdk/init.o 00:58:38.942 SYMLINK libspdk_json.so 00:58:38.942 CC lib/env_dpdk/threads.o 00:58:38.942 LIB libspdk_rdma_provider.a 00:58:38.942 SO libspdk_rdma_provider.so.6.0 00:58:38.942 CC lib/env_dpdk/pci_ioat.o 00:58:38.942 SYMLINK libspdk_rdma_provider.so 00:58:38.942 CC lib/env_dpdk/pci_virtio.o 00:58:38.942 CC lib/env_dpdk/pci_vmd.o 00:58:38.942 LIB libspdk_idxd.a 00:58:38.942 SO libspdk_idxd.so.12.0 00:58:38.942 CC lib/jsonrpc/jsonrpc_server.o 00:58:38.942 CC lib/env_dpdk/pci_idxd.o 00:58:38.942 LIB libspdk_vmd.a 00:58:38.942 SO libspdk_vmd.so.6.0 00:58:38.942 SYMLINK libspdk_idxd.so 00:58:38.942 CC lib/env_dpdk/pci_event.o 00:58:38.942 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:58:38.942 CC lib/jsonrpc/jsonrpc_client.o 00:58:38.942 CC lib/env_dpdk/sigbus_handler.o 00:58:38.942 SYMLINK libspdk_vmd.so 00:58:38.942 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:58:38.942 CC lib/env_dpdk/pci_dpdk.o 00:58:38.942 CC lib/env_dpdk/pci_dpdk_2207.o 00:58:38.942 CC lib/env_dpdk/pci_dpdk_2211.o 00:58:38.942 LIB libspdk_jsonrpc.a 00:58:38.942 SO libspdk_jsonrpc.so.6.0 00:58:38.942 SYMLINK libspdk_jsonrpc.so 00:58:38.942 CC lib/rpc/rpc.o 00:58:38.942 LIB libspdk_env_dpdk.a 00:58:38.942 SO libspdk_env_dpdk.so.14.1 00:58:38.942 LIB libspdk_rpc.a 00:58:38.942 SO libspdk_rpc.so.6.0 00:58:38.942 SYMLINK libspdk_env_dpdk.so 00:58:38.942 SYMLINK libspdk_rpc.so 00:58:38.942 CC lib/notify/notify_rpc.o 00:58:38.942 CC lib/notify/notify.o 00:58:38.942 CC lib/keyring/keyring_rpc.o 00:58:38.942 CC lib/keyring/keyring.o 00:58:38.942 CC lib/trace/trace.o 00:58:38.942 CC lib/trace/trace_flags.o 00:58:38.942 CC lib/trace/trace_rpc.o 00:58:38.942 LIB libspdk_notify.a 00:58:39.199 SO libspdk_notify.so.6.0 00:58:39.199 LIB libspdk_keyring.a 00:58:39.199 SO libspdk_keyring.so.1.0 00:58:39.199 SYMLINK libspdk_notify.so 00:58:39.199 SYMLINK libspdk_keyring.so 00:58:39.199 LIB libspdk_trace.a 00:58:39.199 SO libspdk_trace.so.10.0 00:58:39.455 SYMLINK libspdk_trace.so 00:58:39.715 CC lib/sock/sock.o 00:58:39.715 CC lib/sock/sock_rpc.o 00:58:39.715 CC lib/thread/thread.o 00:58:39.715 CC lib/thread/iobuf.o 00:58:39.972 LIB libspdk_sock.a 00:58:40.233 SO libspdk_sock.so.10.0 00:58:40.233 SYMLINK libspdk_sock.so 00:58:40.496 CC lib/nvme/nvme_ctrlr_cmd.o 00:58:40.496 CC lib/nvme/nvme_ctrlr.o 00:58:40.496 CC lib/nvme/nvme_fabric.o 00:58:40.496 CC lib/nvme/nvme_ns.o 00:58:40.496 CC lib/nvme/nvme_ns_cmd.o 00:58:40.496 CC lib/nvme/nvme_pcie_common.o 00:58:40.496 CC lib/nvme/nvme_pcie.o 00:58:40.496 CC lib/nvme/nvme_qpair.o 00:58:40.496 CC lib/nvme/nvme.o 00:58:41.093 LIB libspdk_thread.a 00:58:41.352 SO libspdk_thread.so.10.1 00:58:41.352 CC lib/nvme/nvme_quirks.o 00:58:41.352 CC lib/nvme/nvme_transport.o 00:58:41.352 SYMLINK libspdk_thread.so 00:58:41.352 CC lib/nvme/nvme_discovery.o 00:58:41.611 CC lib/accel/accel.o 00:58:41.611 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:58:41.611 CC lib/init/json_config.o 00:58:41.611 CC lib/blob/blobstore.o 00:58:41.611 CC lib/init/subsystem.o 00:58:41.611 CC lib/virtio/virtio.o 00:58:41.869 CC lib/accel/accel_rpc.o 00:58:41.869 CC lib/init/subsystem_rpc.o 00:58:41.869 CC lib/accel/accel_sw.o 00:58:41.869 CC lib/blob/request.o 00:58:42.127 CC lib/virtio/virtio_vhost_user.o 00:58:42.127 CC lib/init/rpc.o 00:58:42.127 CC lib/virtio/virtio_vfio_user.o 00:58:42.127 CC lib/virtio/virtio_pci.o 00:58:42.127 CC lib/blob/zeroes.o 00:58:42.127 CC lib/blob/blob_bs_dev.o 00:58:42.127 LIB libspdk_init.a 00:58:42.127 SO libspdk_init.so.5.0 00:58:42.384 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:58:42.384 CC lib/nvme/nvme_tcp.o 00:58:42.384 CC lib/nvme/nvme_opal.o 00:58:42.384 SYMLINK libspdk_init.so 00:58:42.384 CC lib/nvme/nvme_io_msg.o 00:58:42.384 CC lib/nvme/nvme_poll_group.o 00:58:42.384 LIB libspdk_virtio.a 00:58:42.384 LIB libspdk_accel.a 00:58:42.384 SO libspdk_virtio.so.7.0 00:58:42.641 SO libspdk_accel.so.16.0 00:58:42.641 SYMLINK libspdk_virtio.so 00:58:42.641 CC lib/nvme/nvme_zns.o 00:58:42.641 SYMLINK libspdk_accel.so 00:58:42.641 CC lib/nvme/nvme_stubs.o 00:58:42.641 CC lib/event/app.o 00:58:42.899 CC lib/nvme/nvme_auth.o 00:58:42.899 CC lib/bdev/bdev.o 00:58:42.899 CC lib/bdev/bdev_rpc.o 00:58:42.899 CC lib/nvme/nvme_cuse.o 00:58:42.899 CC 
lib/nvme/nvme_rdma.o 00:58:43.156 CC lib/event/reactor.o 00:58:43.156 CC lib/bdev/bdev_zone.o 00:58:43.156 CC lib/event/log_rpc.o 00:58:43.156 CC lib/bdev/part.o 00:58:43.412 CC lib/bdev/scsi_nvme.o 00:58:43.412 CC lib/event/app_rpc.o 00:58:43.412 CC lib/event/scheduler_static.o 00:58:43.669 LIB libspdk_event.a 00:58:43.669 SO libspdk_event.so.14.0 00:58:43.926 SYMLINK libspdk_event.so 00:58:44.490 LIB libspdk_nvme.a 00:58:44.490 SO libspdk_nvme.so.13.1 00:58:44.490 LIB libspdk_blob.a 00:58:44.747 SO libspdk_blob.so.11.0 00:58:44.747 SYMLINK libspdk_blob.so 00:58:44.747 SYMLINK libspdk_nvme.so 00:58:45.005 CC lib/blobfs/tree.o 00:58:45.005 CC lib/blobfs/blobfs.o 00:58:45.005 CC lib/lvol/lvol.o 00:58:45.570 LIB libspdk_bdev.a 00:58:45.570 SO libspdk_bdev.so.16.0 00:58:45.570 SYMLINK libspdk_bdev.so 00:58:45.828 LIB libspdk_blobfs.a 00:58:45.828 SO libspdk_blobfs.so.10.0 00:58:45.828 CC lib/scsi/dev.o 00:58:45.828 CC lib/scsi/lun.o 00:58:45.828 CC lib/scsi/port.o 00:58:45.828 CC lib/scsi/scsi.o 00:58:45.828 CC lib/ftl/ftl_core.o 00:58:45.828 CC lib/ublk/ublk.o 00:58:45.828 CC lib/nvmf/ctrlr.o 00:58:45.828 CC lib/nbd/nbd.o 00:58:45.828 SYMLINK libspdk_blobfs.so 00:58:45.828 CC lib/nbd/nbd_rpc.o 00:58:45.828 LIB libspdk_lvol.a 00:58:45.828 SO libspdk_lvol.so.10.0 00:58:46.085 CC lib/scsi/scsi_bdev.o 00:58:46.085 SYMLINK libspdk_lvol.so 00:58:46.085 CC lib/scsi/scsi_pr.o 00:58:46.085 CC lib/scsi/scsi_rpc.o 00:58:46.085 CC lib/scsi/task.o 00:58:46.085 CC lib/nvmf/ctrlr_discovery.o 00:58:46.085 CC lib/nvmf/ctrlr_bdev.o 00:58:46.342 CC lib/nvmf/subsystem.o 00:58:46.342 CC lib/nvmf/nvmf.o 00:58:46.342 CC lib/ftl/ftl_init.o 00:58:46.342 LIB libspdk_nbd.a 00:58:46.342 SO libspdk_nbd.so.7.0 00:58:46.342 CC lib/nvmf/nvmf_rpc.o 00:58:46.342 SYMLINK libspdk_nbd.so 00:58:46.342 CC lib/ftl/ftl_layout.o 00:58:46.342 CC lib/ublk/ublk_rpc.o 00:58:46.599 LIB libspdk_scsi.a 00:58:46.599 CC lib/ftl/ftl_debug.o 00:58:46.599 SO libspdk_scsi.so.9.0 00:58:46.599 LIB libspdk_ublk.a 00:58:46.599 CC lib/ftl/ftl_io.o 00:58:46.599 SYMLINK libspdk_scsi.so 00:58:46.599 SO libspdk_ublk.so.3.0 00:58:46.857 CC lib/nvmf/transport.o 00:58:46.857 SYMLINK libspdk_ublk.so 00:58:46.857 CC lib/nvmf/tcp.o 00:58:46.857 CC lib/iscsi/conn.o 00:58:46.857 CC lib/nvmf/stubs.o 00:58:46.857 CC lib/ftl/ftl_sb.o 00:58:46.857 CC lib/vhost/vhost.o 00:58:47.114 CC lib/ftl/ftl_l2p.o 00:58:47.114 CC lib/ftl/ftl_l2p_flat.o 00:58:47.371 CC lib/ftl/ftl_nv_cache.o 00:58:47.371 CC lib/vhost/vhost_rpc.o 00:58:47.371 CC lib/vhost/vhost_scsi.o 00:58:47.371 CC lib/vhost/vhost_blk.o 00:58:47.371 CC lib/iscsi/init_grp.o 00:58:47.371 CC lib/nvmf/mdns_server.o 00:58:47.629 CC lib/iscsi/iscsi.o 00:58:47.629 CC lib/iscsi/md5.o 00:58:47.629 CC lib/iscsi/param.o 00:58:47.888 CC lib/iscsi/portal_grp.o 00:58:47.888 CC lib/iscsi/tgt_node.o 00:58:47.888 CC lib/vhost/rte_vhost_user.o 00:58:48.147 CC lib/iscsi/iscsi_subsystem.o 00:58:48.147 CC lib/iscsi/iscsi_rpc.o 00:58:48.147 CC lib/nvmf/rdma.o 00:58:48.147 CC lib/ftl/ftl_band.o 00:58:48.404 CC lib/nvmf/auth.o 00:58:48.404 CC lib/ftl/ftl_band_ops.o 00:58:48.404 CC lib/iscsi/task.o 00:58:48.404 CC lib/ftl/ftl_writer.o 00:58:48.404 CC lib/ftl/ftl_rq.o 00:58:48.404 CC lib/ftl/ftl_reloc.o 00:58:48.660 CC lib/ftl/ftl_l2p_cache.o 00:58:48.660 CC lib/ftl/ftl_p2l.o 00:58:48.660 CC lib/ftl/mngt/ftl_mngt.o 00:58:48.660 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:58:48.660 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:58:48.916 CC lib/ftl/mngt/ftl_mngt_startup.o 00:58:48.916 LIB libspdk_iscsi.a 00:58:48.916 CC lib/ftl/mngt/ftl_mngt_md.o 
00:58:48.916 CC lib/ftl/mngt/ftl_mngt_misc.o 00:58:48.916 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:58:48.916 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:58:48.916 SO libspdk_iscsi.so.8.0 00:58:49.173 LIB libspdk_vhost.a 00:58:49.173 CC lib/ftl/mngt/ftl_mngt_band.o 00:58:49.173 SO libspdk_vhost.so.8.0 00:58:49.173 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:58:49.173 SYMLINK libspdk_iscsi.so 00:58:49.173 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:58:49.173 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:58:49.173 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:58:49.173 CC lib/ftl/utils/ftl_conf.o 00:58:49.173 SYMLINK libspdk_vhost.so 00:58:49.173 CC lib/ftl/utils/ftl_md.o 00:58:49.173 CC lib/ftl/utils/ftl_mempool.o 00:58:49.173 CC lib/ftl/utils/ftl_bitmap.o 00:58:49.431 CC lib/ftl/utils/ftl_property.o 00:58:49.431 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:58:49.431 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:58:49.431 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:58:49.431 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:58:49.431 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:58:49.689 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:58:49.689 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:58:49.689 CC lib/ftl/upgrade/ftl_sb_v3.o 00:58:49.689 CC lib/ftl/upgrade/ftl_sb_v5.o 00:58:49.689 CC lib/ftl/nvc/ftl_nvc_dev.o 00:58:49.689 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:58:49.689 CC lib/ftl/base/ftl_base_dev.o 00:58:49.689 CC lib/ftl/ftl_trace.o 00:58:49.689 CC lib/ftl/base/ftl_base_bdev.o 00:58:50.254 LIB libspdk_ftl.a 00:58:50.254 LIB libspdk_nvmf.a 00:58:50.254 SO libspdk_ftl.so.9.0 00:58:50.511 SO libspdk_nvmf.so.19.0 00:58:50.776 SYMLINK libspdk_nvmf.so 00:58:50.776 SYMLINK libspdk_ftl.so 00:58:51.047 CC module/env_dpdk/env_dpdk_rpc.o 00:58:51.047 CC module/accel/error/accel_error.o 00:58:51.047 CC module/accel/dsa/accel_dsa.o 00:58:51.047 CC module/blob/bdev/blob_bdev.o 00:58:51.047 CC module/scheduler/dynamic/scheduler_dynamic.o 00:58:51.047 CC module/accel/ioat/accel_ioat.o 00:58:51.047 CC module/accel/iaa/accel_iaa.o 00:58:51.047 CC module/keyring/file/keyring.o 00:58:51.047 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:58:51.304 CC module/sock/posix/posix.o 00:58:51.304 LIB libspdk_env_dpdk_rpc.a 00:58:51.304 SO libspdk_env_dpdk_rpc.so.6.0 00:58:51.304 LIB libspdk_scheduler_dpdk_governor.a 00:58:51.304 CC module/accel/error/accel_error_rpc.o 00:58:51.304 CC module/keyring/file/keyring_rpc.o 00:58:51.304 CC module/accel/ioat/accel_ioat_rpc.o 00:58:51.304 LIB libspdk_scheduler_dynamic.a 00:58:51.304 SYMLINK libspdk_env_dpdk_rpc.so 00:58:51.304 CC module/accel/iaa/accel_iaa_rpc.o 00:58:51.304 SO libspdk_scheduler_dpdk_governor.so.4.0 00:58:51.304 SO libspdk_scheduler_dynamic.so.4.0 00:58:51.304 CC module/accel/dsa/accel_dsa_rpc.o 00:58:51.304 LIB libspdk_blob_bdev.a 00:58:51.562 SYMLINK libspdk_scheduler_dpdk_governor.so 00:58:51.562 SO libspdk_blob_bdev.so.11.0 00:58:51.562 SYMLINK libspdk_scheduler_dynamic.so 00:58:51.562 LIB libspdk_accel_error.a 00:58:51.562 LIB libspdk_accel_ioat.a 00:58:51.562 SYMLINK libspdk_blob_bdev.so 00:58:51.562 LIB libspdk_accel_iaa.a 00:58:51.562 LIB libspdk_keyring_file.a 00:58:51.562 SO libspdk_accel_ioat.so.6.0 00:58:51.562 SO libspdk_accel_error.so.2.0 00:58:51.562 SO libspdk_accel_iaa.so.3.0 00:58:51.562 SO libspdk_keyring_file.so.1.0 00:58:51.562 CC module/scheduler/gscheduler/gscheduler.o 00:58:51.562 LIB libspdk_accel_dsa.a 00:58:51.562 SO libspdk_accel_dsa.so.5.0 00:58:51.562 SYMLINK libspdk_accel_error.so 00:58:51.562 SYMLINK libspdk_accel_iaa.so 00:58:51.562 SYMLINK libspdk_keyring_file.so 00:58:51.562 SYMLINK 
libspdk_accel_ioat.so 00:58:51.562 CC module/keyring/linux/keyring.o 00:58:51.562 SYMLINK libspdk_accel_dsa.so 00:58:51.819 LIB libspdk_scheduler_gscheduler.a 00:58:51.819 SO libspdk_scheduler_gscheduler.so.4.0 00:58:51.819 CC module/bdev/delay/vbdev_delay.o 00:58:51.819 CC module/keyring/linux/keyring_rpc.o 00:58:51.819 CC module/bdev/error/vbdev_error.o 00:58:51.819 CC module/blobfs/bdev/blobfs_bdev.o 00:58:51.819 CC module/bdev/malloc/bdev_malloc.o 00:58:51.819 CC module/bdev/gpt/gpt.o 00:58:51.819 SYMLINK libspdk_scheduler_gscheduler.so 00:58:51.819 CC module/bdev/gpt/vbdev_gpt.o 00:58:51.819 CC module/bdev/lvol/vbdev_lvol.o 00:58:51.819 CC module/bdev/null/bdev_null.o 00:58:52.076 LIB libspdk_sock_posix.a 00:58:52.076 LIB libspdk_keyring_linux.a 00:58:52.076 SO libspdk_keyring_linux.so.1.0 00:58:52.076 SO libspdk_sock_posix.so.6.0 00:58:52.076 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:58:52.076 SYMLINK libspdk_keyring_linux.so 00:58:52.076 SYMLINK libspdk_sock_posix.so 00:58:52.076 CC module/bdev/error/vbdev_error_rpc.o 00:58:52.076 LIB libspdk_bdev_gpt.a 00:58:52.076 CC module/bdev/null/bdev_null_rpc.o 00:58:52.076 SO libspdk_bdev_gpt.so.6.0 00:58:52.333 LIB libspdk_blobfs_bdev.a 00:58:52.333 CC module/bdev/delay/vbdev_delay_rpc.o 00:58:52.333 CC module/bdev/malloc/bdev_malloc_rpc.o 00:58:52.333 CC module/bdev/nvme/bdev_nvme.o 00:58:52.333 SO libspdk_blobfs_bdev.so.6.0 00:58:52.333 CC module/bdev/passthru/vbdev_passthru.o 00:58:52.333 SYMLINK libspdk_bdev_gpt.so 00:58:52.333 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:58:52.333 LIB libspdk_bdev_error.a 00:58:52.333 CC module/bdev/raid/bdev_raid.o 00:58:52.333 SYMLINK libspdk_blobfs_bdev.so 00:58:52.333 LIB libspdk_bdev_null.a 00:58:52.333 SO libspdk_bdev_error.so.6.0 00:58:52.333 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:58:52.333 SO libspdk_bdev_null.so.6.0 00:58:52.333 LIB libspdk_bdev_malloc.a 00:58:52.333 LIB libspdk_bdev_delay.a 00:58:52.333 SO libspdk_bdev_malloc.so.6.0 00:58:52.333 SYMLINK libspdk_bdev_error.so 00:58:52.333 SYMLINK libspdk_bdev_null.so 00:58:52.591 SO libspdk_bdev_delay.so.6.0 00:58:52.591 CC module/bdev/split/vbdev_split.o 00:58:52.591 SYMLINK libspdk_bdev_malloc.so 00:58:52.591 SYMLINK libspdk_bdev_delay.so 00:58:52.591 CC module/bdev/split/vbdev_split_rpc.o 00:58:52.591 LIB libspdk_bdev_passthru.a 00:58:52.591 SO libspdk_bdev_passthru.so.6.0 00:58:52.591 CC module/bdev/aio/bdev_aio.o 00:58:52.591 CC module/bdev/ftl/bdev_ftl.o 00:58:52.591 CC module/bdev/zone_block/vbdev_zone_block.o 00:58:52.591 SYMLINK libspdk_bdev_passthru.so 00:58:52.848 CC module/bdev/iscsi/bdev_iscsi.o 00:58:52.848 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:58:52.848 LIB libspdk_bdev_lvol.a 00:58:52.848 LIB libspdk_bdev_split.a 00:58:52.848 SO libspdk_bdev_lvol.so.6.0 00:58:52.848 SO libspdk_bdev_split.so.6.0 00:58:52.848 SYMLINK libspdk_bdev_lvol.so 00:58:52.848 CC module/bdev/virtio/bdev_virtio_scsi.o 00:58:52.848 CC module/bdev/virtio/bdev_virtio_blk.o 00:58:52.848 SYMLINK libspdk_bdev_split.so 00:58:52.848 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:58:52.848 CC module/bdev/nvme/bdev_nvme_rpc.o 00:58:52.848 CC module/bdev/ftl/bdev_ftl_rpc.o 00:58:53.105 CC module/bdev/aio/bdev_aio_rpc.o 00:58:53.105 LIB libspdk_bdev_zone_block.a 00:58:53.105 SO libspdk_bdev_zone_block.so.6.0 00:58:53.105 CC module/bdev/virtio/bdev_virtio_rpc.o 00:58:53.105 SYMLINK libspdk_bdev_zone_block.so 00:58:53.105 CC module/bdev/raid/bdev_raid_rpc.o 00:58:53.105 LIB libspdk_bdev_iscsi.a 00:58:53.105 SO libspdk_bdev_iscsi.so.6.0 00:58:53.105 LIB 
libspdk_bdev_aio.a 00:58:53.105 CC module/bdev/raid/bdev_raid_sb.o 00:58:53.105 LIB libspdk_bdev_ftl.a 00:58:53.105 SO libspdk_bdev_aio.so.6.0 00:58:53.105 SYMLINK libspdk_bdev_iscsi.so 00:58:53.363 CC module/bdev/raid/raid0.o 00:58:53.363 SO libspdk_bdev_ftl.so.6.0 00:58:53.363 CC module/bdev/raid/raid1.o 00:58:53.363 CC module/bdev/nvme/nvme_rpc.o 00:58:53.363 SYMLINK libspdk_bdev_aio.so 00:58:53.363 CC module/bdev/nvme/bdev_mdns_client.o 00:58:53.363 SYMLINK libspdk_bdev_ftl.so 00:58:53.363 CC module/bdev/nvme/vbdev_opal.o 00:58:53.363 CC module/bdev/raid/concat.o 00:58:53.363 CC module/bdev/nvme/vbdev_opal_rpc.o 00:58:53.363 LIB libspdk_bdev_virtio.a 00:58:53.620 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:58:53.620 SO libspdk_bdev_virtio.so.6.0 00:58:53.620 SYMLINK libspdk_bdev_virtio.so 00:58:53.620 LIB libspdk_bdev_raid.a 00:58:53.620 SO libspdk_bdev_raid.so.6.0 00:58:53.878 SYMLINK libspdk_bdev_raid.so 00:58:54.444 LIB libspdk_bdev_nvme.a 00:58:54.702 SO libspdk_bdev_nvme.so.7.0 00:58:54.702 SYMLINK libspdk_bdev_nvme.so 00:58:55.267 CC module/event/subsystems/vmd/vmd.o 00:58:55.267 CC module/event/subsystems/scheduler/scheduler.o 00:58:55.267 CC module/event/subsystems/vmd/vmd_rpc.o 00:58:55.267 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:58:55.267 CC module/event/subsystems/iobuf/iobuf.o 00:58:55.267 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:58:55.267 CC module/event/subsystems/sock/sock.o 00:58:55.267 CC module/event/subsystems/keyring/keyring.o 00:58:55.267 LIB libspdk_event_scheduler.a 00:58:55.267 LIB libspdk_event_keyring.a 00:58:55.267 LIB libspdk_event_vmd.a 00:58:55.267 LIB libspdk_event_vhost_blk.a 00:58:55.268 LIB libspdk_event_sock.a 00:58:55.268 SO libspdk_event_scheduler.so.4.0 00:58:55.268 SO libspdk_event_keyring.so.1.0 00:58:55.268 LIB libspdk_event_iobuf.a 00:58:55.268 SO libspdk_event_sock.so.5.0 00:58:55.268 SO libspdk_event_vhost_blk.so.3.0 00:58:55.268 SO libspdk_event_vmd.so.6.0 00:58:55.268 SO libspdk_event_iobuf.so.3.0 00:58:55.526 SYMLINK libspdk_event_scheduler.so 00:58:55.526 SYMLINK libspdk_event_keyring.so 00:58:55.526 SYMLINK libspdk_event_sock.so 00:58:55.526 SYMLINK libspdk_event_vmd.so 00:58:55.526 SYMLINK libspdk_event_vhost_blk.so 00:58:55.526 SYMLINK libspdk_event_iobuf.so 00:58:55.783 CC module/event/subsystems/accel/accel.o 00:58:55.783 LIB libspdk_event_accel.a 00:58:56.041 SO libspdk_event_accel.so.6.0 00:58:56.041 SYMLINK libspdk_event_accel.so 00:58:56.298 CC module/event/subsystems/bdev/bdev.o 00:58:56.555 LIB libspdk_event_bdev.a 00:58:56.555 SO libspdk_event_bdev.so.6.0 00:58:56.555 SYMLINK libspdk_event_bdev.so 00:58:56.811 CC module/event/subsystems/nbd/nbd.o 00:58:56.811 CC module/event/subsystems/ublk/ublk.o 00:58:56.811 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:58:56.811 CC module/event/subsystems/scsi/scsi.o 00:58:56.811 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:58:57.068 LIB libspdk_event_ublk.a 00:58:57.068 LIB libspdk_event_scsi.a 00:58:57.068 LIB libspdk_event_nbd.a 00:58:57.068 SO libspdk_event_ublk.so.3.0 00:58:57.068 SO libspdk_event_scsi.so.6.0 00:58:57.068 SO libspdk_event_nbd.so.6.0 00:58:57.068 SYMLINK libspdk_event_ublk.so 00:58:57.068 SYMLINK libspdk_event_scsi.so 00:58:57.068 LIB libspdk_event_nvmf.a 00:58:57.068 SYMLINK libspdk_event_nbd.so 00:58:57.068 SO libspdk_event_nvmf.so.6.0 00:58:57.325 SYMLINK libspdk_event_nvmf.so 00:58:57.325 CC module/event/subsystems/iscsi/iscsi.o 00:58:57.325 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:58:57.582 LIB libspdk_event_vhost_scsi.a 
00:58:57.582 LIB libspdk_event_iscsi.a 00:58:57.582 SO libspdk_event_vhost_scsi.so.3.0 00:58:57.582 SO libspdk_event_iscsi.so.6.0 00:58:57.582 SYMLINK libspdk_event_vhost_scsi.so 00:58:57.582 SYMLINK libspdk_event_iscsi.so 00:58:57.840 SO libspdk.so.6.0 00:58:57.840 SYMLINK libspdk.so 00:58:58.096 CXX app/trace/trace.o 00:58:58.096 CC app/trace_record/trace_record.o 00:58:58.096 CC app/spdk_lspci/spdk_lspci.o 00:58:58.096 CC app/iscsi_tgt/iscsi_tgt.o 00:58:58.096 CC app/nvmf_tgt/nvmf_main.o 00:58:58.096 CC app/spdk_tgt/spdk_tgt.o 00:58:58.096 CC test/thread/poller_perf/poller_perf.o 00:58:58.096 CC examples/util/zipf/zipf.o 00:58:58.096 CC test/dma/test_dma/test_dma.o 00:58:58.352 CC test/app/bdev_svc/bdev_svc.o 00:58:58.352 LINK spdk_lspci 00:58:58.352 LINK nvmf_tgt 00:58:58.352 LINK iscsi_tgt 00:58:58.352 LINK poller_perf 00:58:58.352 LINK zipf 00:58:58.352 LINK spdk_trace_record 00:58:58.352 LINK spdk_tgt 00:58:58.622 LINK bdev_svc 00:58:58.622 LINK spdk_trace 00:58:58.622 CC app/spdk_nvme_perf/perf.o 00:58:58.622 LINK test_dma 00:58:58.622 CC app/spdk_nvme_identify/identify.o 00:58:58.622 CC app/spdk_nvme_discover/discovery_aer.o 00:58:58.622 CC app/spdk_top/spdk_top.o 00:58:58.879 CC examples/ioat/perf/perf.o 00:58:58.879 CC app/spdk_dd/spdk_dd.o 00:58:58.879 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:58:58.879 CC app/fio/nvme/fio_plugin.o 00:58:58.879 LINK spdk_nvme_discover 00:58:58.879 CC examples/vmd/lsvmd/lsvmd.o 00:58:59.136 LINK ioat_perf 00:58:59.136 CC examples/idxd/perf/perf.o 00:58:59.136 LINK lsvmd 00:58:59.136 LINK spdk_dd 00:58:59.136 CC examples/ioat/verify/verify.o 00:58:59.393 LINK nvme_fuzz 00:58:59.393 CC examples/vmd/led/led.o 00:58:59.393 LINK idxd_perf 00:58:59.393 LINK spdk_nvme_identify 00:58:59.393 LINK verify 00:58:59.393 LINK spdk_nvme 00:58:59.393 LINK spdk_nvme_perf 00:58:59.393 CC examples/interrupt_tgt/interrupt_tgt.o 00:58:59.651 LINK led 00:58:59.651 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:58:59.651 LINK spdk_top 00:58:59.651 LINK interrupt_tgt 00:58:59.651 CC app/fio/bdev/fio_plugin.o 00:58:59.651 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:58:59.651 CC test/app/histogram_perf/histogram_perf.o 00:58:59.909 CC app/vhost/vhost.o 00:58:59.909 CC examples/thread/thread/thread_ex.o 00:58:59.909 TEST_HEADER include/spdk/accel.h 00:58:59.909 TEST_HEADER include/spdk/accel_module.h 00:58:59.909 TEST_HEADER include/spdk/assert.h 00:58:59.909 TEST_HEADER include/spdk/barrier.h 00:58:59.909 TEST_HEADER include/spdk/base64.h 00:58:59.909 TEST_HEADER include/spdk/bdev.h 00:58:59.909 TEST_HEADER include/spdk/bdev_module.h 00:58:59.909 TEST_HEADER include/spdk/bdev_zone.h 00:58:59.909 TEST_HEADER include/spdk/bit_array.h 00:58:59.909 TEST_HEADER include/spdk/bit_pool.h 00:58:59.909 TEST_HEADER include/spdk/blob_bdev.h 00:58:59.909 TEST_HEADER include/spdk/blobfs_bdev.h 00:58:59.909 TEST_HEADER include/spdk/blobfs.h 00:58:59.909 TEST_HEADER include/spdk/blob.h 00:58:59.909 TEST_HEADER include/spdk/conf.h 00:58:59.909 TEST_HEADER include/spdk/config.h 00:58:59.909 TEST_HEADER include/spdk/cpuset.h 00:58:59.909 TEST_HEADER include/spdk/crc16.h 00:58:59.909 TEST_HEADER include/spdk/crc32.h 00:58:59.909 TEST_HEADER include/spdk/crc64.h 00:58:59.909 TEST_HEADER include/spdk/dif.h 00:58:59.909 TEST_HEADER include/spdk/dma.h 00:58:59.909 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:58:59.909 TEST_HEADER include/spdk/endian.h 00:58:59.909 TEST_HEADER include/spdk/env_dpdk.h 00:58:59.909 TEST_HEADER include/spdk/env.h 00:58:59.909 TEST_HEADER include/spdk/event.h 
00:58:59.909 CC examples/sock/hello_world/hello_sock.o 00:58:59.909 TEST_HEADER include/spdk/fd_group.h 00:58:59.909 TEST_HEADER include/spdk/fd.h 00:58:59.909 TEST_HEADER include/spdk/file.h 00:58:59.909 TEST_HEADER include/spdk/ftl.h 00:58:59.909 TEST_HEADER include/spdk/gpt_spec.h 00:58:59.909 TEST_HEADER include/spdk/hexlify.h 00:58:59.909 TEST_HEADER include/spdk/histogram_data.h 00:58:59.909 TEST_HEADER include/spdk/idxd.h 00:58:59.909 TEST_HEADER include/spdk/idxd_spec.h 00:58:59.909 TEST_HEADER include/spdk/init.h 00:58:59.909 TEST_HEADER include/spdk/ioat.h 00:58:59.909 TEST_HEADER include/spdk/ioat_spec.h 00:58:59.909 TEST_HEADER include/spdk/iscsi_spec.h 00:58:59.909 TEST_HEADER include/spdk/json.h 00:58:59.909 TEST_HEADER include/spdk/jsonrpc.h 00:58:59.909 TEST_HEADER include/spdk/keyring.h 00:58:59.909 TEST_HEADER include/spdk/keyring_module.h 00:58:59.909 TEST_HEADER include/spdk/likely.h 00:58:59.909 TEST_HEADER include/spdk/log.h 00:58:59.909 TEST_HEADER include/spdk/lvol.h 00:58:59.909 TEST_HEADER include/spdk/memory.h 00:58:59.909 LINK histogram_perf 00:58:59.909 TEST_HEADER include/spdk/mmio.h 00:58:59.909 TEST_HEADER include/spdk/nbd.h 00:58:59.909 TEST_HEADER include/spdk/net.h 00:58:59.909 TEST_HEADER include/spdk/notify.h 00:58:59.909 TEST_HEADER include/spdk/nvme.h 00:58:59.909 TEST_HEADER include/spdk/nvme_intel.h 00:58:59.909 TEST_HEADER include/spdk/nvme_ocssd.h 00:58:59.909 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:58:59.909 TEST_HEADER include/spdk/nvme_spec.h 00:58:59.909 TEST_HEADER include/spdk/nvme_zns.h 00:59:00.167 LINK vhost 00:59:00.167 TEST_HEADER include/spdk/nvmf_cmd.h 00:59:00.167 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:59:00.167 TEST_HEADER include/spdk/nvmf.h 00:59:00.167 TEST_HEADER include/spdk/nvmf_spec.h 00:59:00.167 TEST_HEADER include/spdk/nvmf_transport.h 00:59:00.167 TEST_HEADER include/spdk/opal.h 00:59:00.167 TEST_HEADER include/spdk/opal_spec.h 00:59:00.167 TEST_HEADER include/spdk/pci_ids.h 00:59:00.167 TEST_HEADER include/spdk/pipe.h 00:59:00.167 TEST_HEADER include/spdk/queue.h 00:59:00.167 TEST_HEADER include/spdk/reduce.h 00:59:00.167 TEST_HEADER include/spdk/rpc.h 00:59:00.167 TEST_HEADER include/spdk/scheduler.h 00:59:00.167 TEST_HEADER include/spdk/scsi.h 00:59:00.167 TEST_HEADER include/spdk/scsi_spec.h 00:59:00.167 TEST_HEADER include/spdk/sock.h 00:59:00.167 TEST_HEADER include/spdk/stdinc.h 00:59:00.167 TEST_HEADER include/spdk/string.h 00:59:00.167 TEST_HEADER include/spdk/thread.h 00:59:00.167 TEST_HEADER include/spdk/trace.h 00:59:00.167 TEST_HEADER include/spdk/trace_parser.h 00:59:00.167 TEST_HEADER include/spdk/tree.h 00:59:00.167 TEST_HEADER include/spdk/ublk.h 00:59:00.167 TEST_HEADER include/spdk/util.h 00:59:00.167 TEST_HEADER include/spdk/uuid.h 00:59:00.167 LINK thread 00:59:00.167 TEST_HEADER include/spdk/version.h 00:59:00.167 TEST_HEADER include/spdk/vfio_user_pci.h 00:59:00.167 TEST_HEADER include/spdk/vfio_user_spec.h 00:59:00.167 TEST_HEADER include/spdk/vhost.h 00:59:00.167 TEST_HEADER include/spdk/vmd.h 00:59:00.167 TEST_HEADER include/spdk/xor.h 00:59:00.167 TEST_HEADER include/spdk/zipf.h 00:59:00.167 CXX test/cpp_headers/accel.o 00:59:00.167 LINK hello_sock 00:59:00.167 LINK spdk_bdev 00:59:00.424 CXX test/cpp_headers/accel_module.o 00:59:00.424 CC test/env/mem_callbacks/mem_callbacks.o 00:59:00.424 LINK vhost_fuzz 00:59:00.424 CXX test/cpp_headers/assert.o 00:59:00.681 CC test/env/vtophys/vtophys.o 00:59:00.681 CC test/app/jsoncat/jsoncat.o 00:59:00.681 CC test/app/stub/stub.o 00:59:00.681 CC 
examples/accel/perf/accel_perf.o 00:59:00.681 CXX test/cpp_headers/barrier.o 00:59:00.681 LINK vtophys 00:59:00.681 LINK jsoncat 00:59:00.681 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:59:00.681 CC test/event/event_perf/event_perf.o 00:59:00.938 LINK stub 00:59:00.938 CXX test/cpp_headers/base64.o 00:59:00.938 LINK event_perf 00:59:00.938 LINK env_dpdk_post_init 00:59:00.938 CC test/env/memory/memory_ut.o 00:59:00.938 CXX test/cpp_headers/bdev.o 00:59:01.195 LINK mem_callbacks 00:59:01.195 CC examples/blob/hello_world/hello_blob.o 00:59:01.195 CC examples/blob/cli/blobcli.o 00:59:01.195 CC test/event/reactor/reactor.o 00:59:01.195 CC test/env/pci/pci_ut.o 00:59:01.195 LINK accel_perf 00:59:01.195 CXX test/cpp_headers/bdev_module.o 00:59:01.454 LINK reactor 00:59:01.454 LINK iscsi_fuzz 00:59:01.454 LINK hello_blob 00:59:01.454 CC examples/nvme/hello_world/hello_world.o 00:59:01.454 CXX test/cpp_headers/bdev_zone.o 00:59:01.454 CC examples/nvme/reconnect/reconnect.o 00:59:01.711 LINK pci_ut 00:59:01.711 CXX test/cpp_headers/bit_array.o 00:59:01.711 CC test/event/reactor_perf/reactor_perf.o 00:59:01.711 LINK hello_world 00:59:01.711 LINK blobcli 00:59:01.711 CC test/rpc_client/rpc_client_test.o 00:59:01.969 CC test/nvme/aer/aer.o 00:59:01.969 CXX test/cpp_headers/bit_pool.o 00:59:01.969 LINK reactor_perf 00:59:01.969 CXX test/cpp_headers/blob_bdev.o 00:59:01.969 LINK rpc_client_test 00:59:02.227 LINK reconnect 00:59:02.227 LINK memory_ut 00:59:02.227 LINK aer 00:59:02.227 CC test/event/app_repeat/app_repeat.o 00:59:02.227 CXX test/cpp_headers/blobfs_bdev.o 00:59:02.227 CXX test/cpp_headers/blobfs.o 00:59:02.485 CC test/accel/dif/dif.o 00:59:02.485 CC test/blobfs/mkfs/mkfs.o 00:59:02.485 LINK app_repeat 00:59:02.485 CC examples/nvme/nvme_manage/nvme_manage.o 00:59:02.485 CXX test/cpp_headers/blob.o 00:59:02.485 CC test/lvol/esnap/esnap.o 00:59:02.485 CC test/nvme/reset/reset.o 00:59:02.742 CC test/nvme/sgl/sgl.o 00:59:02.743 CC test/nvme/e2edp/nvme_dp.o 00:59:02.743 LINK mkfs 00:59:02.743 CXX test/cpp_headers/conf.o 00:59:02.743 LINK reset 00:59:03.000 CC test/event/scheduler/scheduler.o 00:59:03.000 LINK dif 00:59:03.000 CXX test/cpp_headers/config.o 00:59:03.000 LINK sgl 00:59:03.000 LINK nvme_dp 00:59:03.000 CXX test/cpp_headers/cpuset.o 00:59:03.000 LINK nvme_manage 00:59:03.258 CC test/nvme/overhead/overhead.o 00:59:03.258 LINK scheduler 00:59:03.258 CC test/nvme/err_injection/err_injection.o 00:59:03.258 CXX test/cpp_headers/crc16.o 00:59:03.258 CC test/nvme/startup/startup.o 00:59:03.515 CC test/nvme/reserve/reserve.o 00:59:03.515 CC examples/nvme/arbitration/arbitration.o 00:59:03.515 CXX test/cpp_headers/crc32.o 00:59:03.515 CXX test/cpp_headers/crc64.o 00:59:03.515 LINK err_injection 00:59:03.773 LINK startup 00:59:03.773 LINK reserve 00:59:03.773 CC examples/bdev/hello_world/hello_bdev.o 00:59:03.773 LINK overhead 00:59:03.773 CXX test/cpp_headers/dif.o 00:59:04.030 CC test/nvme/simple_copy/simple_copy.o 00:59:04.030 LINK arbitration 00:59:04.030 LINK hello_bdev 00:59:04.030 CC test/nvme/connect_stress/connect_stress.o 00:59:04.030 CXX test/cpp_headers/dma.o 00:59:04.286 CC test/bdev/bdevio/bdevio.o 00:59:04.286 CC test/nvme/boot_partition/boot_partition.o 00:59:04.286 CC test/nvme/compliance/nvme_compliance.o 00:59:04.286 LINK simple_copy 00:59:04.286 LINK connect_stress 00:59:04.286 CXX test/cpp_headers/endian.o 00:59:04.542 CC examples/nvme/hotplug/hotplug.o 00:59:04.542 LINK boot_partition 00:59:04.542 CC examples/bdev/bdevperf/bdevperf.o 00:59:04.542 CXX 
test/cpp_headers/env_dpdk.o 00:59:04.542 LINK nvme_compliance 00:59:04.799 LINK bdevio 00:59:04.799 CC test/nvme/fused_ordering/fused_ordering.o 00:59:04.799 LINK hotplug 00:59:04.799 CC test/nvme/doorbell_aers/doorbell_aers.o 00:59:04.799 CC test/nvme/fdp/fdp.o 00:59:04.799 CXX test/cpp_headers/env.o 00:59:05.055 LINK doorbell_aers 00:59:05.055 LINK fused_ordering 00:59:05.055 CC examples/nvme/cmb_copy/cmb_copy.o 00:59:05.055 CC examples/nvme/abort/abort.o 00:59:05.055 CXX test/cpp_headers/event.o 00:59:05.055 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:59:05.055 CXX test/cpp_headers/fd_group.o 00:59:05.055 LINK cmb_copy 00:59:05.055 LINK fdp 00:59:05.311 CC test/nvme/cuse/cuse.o 00:59:05.311 CXX test/cpp_headers/fd.o 00:59:05.311 LINK pmr_persistence 00:59:05.311 CXX test/cpp_headers/file.o 00:59:05.311 CXX test/cpp_headers/ftl.o 00:59:05.311 CXX test/cpp_headers/gpt_spec.o 00:59:05.311 LINK bdevperf 00:59:05.311 CXX test/cpp_headers/hexlify.o 00:59:05.568 LINK abort 00:59:05.568 CXX test/cpp_headers/histogram_data.o 00:59:05.568 CXX test/cpp_headers/idxd.o 00:59:05.568 CXX test/cpp_headers/idxd_spec.o 00:59:05.568 CXX test/cpp_headers/init.o 00:59:05.568 CXX test/cpp_headers/ioat.o 00:59:05.568 CXX test/cpp_headers/ioat_spec.o 00:59:05.568 CXX test/cpp_headers/iscsi_spec.o 00:59:05.568 CXX test/cpp_headers/json.o 00:59:05.568 CXX test/cpp_headers/jsonrpc.o 00:59:05.825 CXX test/cpp_headers/keyring.o 00:59:05.825 CXX test/cpp_headers/keyring_module.o 00:59:05.825 CXX test/cpp_headers/likely.o 00:59:05.825 CXX test/cpp_headers/log.o 00:59:05.825 CXX test/cpp_headers/lvol.o 00:59:05.825 CXX test/cpp_headers/memory.o 00:59:05.825 CXX test/cpp_headers/mmio.o 00:59:06.082 CXX test/cpp_headers/nbd.o 00:59:06.082 CC examples/nvmf/nvmf/nvmf.o 00:59:06.082 CXX test/cpp_headers/net.o 00:59:06.082 CXX test/cpp_headers/notify.o 00:59:06.082 CXX test/cpp_headers/nvme.o 00:59:06.082 CXX test/cpp_headers/nvme_intel.o 00:59:06.082 CXX test/cpp_headers/nvme_ocssd.o 00:59:06.082 CXX test/cpp_headers/nvme_ocssd_spec.o 00:59:06.340 CXX test/cpp_headers/nvme_spec.o 00:59:06.340 CXX test/cpp_headers/nvme_zns.o 00:59:06.340 CXX test/cpp_headers/nvmf_cmd.o 00:59:06.340 CXX test/cpp_headers/nvmf_fc_spec.o 00:59:06.340 LINK nvmf 00:59:06.340 CXX test/cpp_headers/nvmf.o 00:59:06.340 CXX test/cpp_headers/nvmf_spec.o 00:59:06.340 CXX test/cpp_headers/nvmf_transport.o 00:59:06.340 CXX test/cpp_headers/opal.o 00:59:06.598 CXX test/cpp_headers/opal_spec.o 00:59:06.598 CXX test/cpp_headers/pci_ids.o 00:59:06.598 CXX test/cpp_headers/pipe.o 00:59:06.598 CXX test/cpp_headers/queue.o 00:59:06.598 CXX test/cpp_headers/reduce.o 00:59:06.598 LINK cuse 00:59:06.598 CXX test/cpp_headers/rpc.o 00:59:06.598 CXX test/cpp_headers/scheduler.o 00:59:06.598 CXX test/cpp_headers/scsi.o 00:59:06.598 CXX test/cpp_headers/scsi_spec.o 00:59:06.598 CXX test/cpp_headers/sock.o 00:59:06.855 CXX test/cpp_headers/stdinc.o 00:59:06.855 CXX test/cpp_headers/string.o 00:59:06.855 CXX test/cpp_headers/thread.o 00:59:06.855 CXX test/cpp_headers/trace.o 00:59:06.855 CXX test/cpp_headers/trace_parser.o 00:59:06.855 CXX test/cpp_headers/tree.o 00:59:06.855 CXX test/cpp_headers/ublk.o 00:59:06.855 CXX test/cpp_headers/util.o 00:59:06.855 CXX test/cpp_headers/uuid.o 00:59:06.855 CXX test/cpp_headers/version.o 00:59:06.855 CXX test/cpp_headers/vfio_user_pci.o 00:59:06.855 CXX test/cpp_headers/vfio_user_spec.o 00:59:06.855 CXX test/cpp_headers/vhost.o 00:59:07.112 CXX test/cpp_headers/vmd.o 00:59:07.112 CXX test/cpp_headers/xor.o 00:59:07.112 CXX 
test/cpp_headers/zipf.o 00:59:08.483 LINK esnap 00:59:09.050 00:59:09.050 real 0m58.265s 00:59:09.050 user 5m27.310s 00:59:09.050 sys 1m13.814s 00:59:09.050 10:56:13 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:59:09.050 10:56:13 make -- common/autotest_common.sh@10 -- $ set +x 00:59:09.050 ************************************ 00:59:09.050 END TEST make 00:59:09.050 ************************************ 00:59:09.050 10:56:14 -- common/autotest_common.sh@1142 -- $ return 0 00:59:09.050 10:56:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:59:09.050 10:56:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:59:09.050 10:56:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:59:09.050 10:56:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:59:09.050 10:56:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:59:09.050 10:56:14 -- pm/common@44 -- $ pid=6039 00:59:09.050 10:56:14 -- pm/common@50 -- $ kill -TERM 6039 00:59:09.050 10:56:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:59:09.050 10:56:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:59:09.050 10:56:14 -- pm/common@44 -- $ pid=6041 00:59:09.050 10:56:14 -- pm/common@50 -- $ kill -TERM 6041 00:59:09.050 10:56:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:59:09.050 10:56:14 -- nvmf/common.sh@7 -- # uname -s 00:59:09.050 10:56:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:09.050 10:56:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:09.050 10:56:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:09.050 10:56:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:09.050 10:56:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:09.050 10:56:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:09.050 10:56:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:09.050 10:56:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:09.050 10:56:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:09.050 10:56:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:09.050 10:56:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 00:59:09.050 10:56:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 00:59:09.050 10:56:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:09.050 10:56:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:09.050 10:56:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:59:09.050 10:56:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:09.050 10:56:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:09.050 10:56:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:09.050 10:56:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:09.050 10:56:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:09.050 10:56:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:09.050 10:56:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:09.050 10:56:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:09.050 10:56:14 -- paths/export.sh@5 -- # export PATH 00:59:09.050 10:56:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:09.050 10:56:14 -- nvmf/common.sh@47 -- # : 0 00:59:09.050 10:56:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:59:09.050 10:56:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:59:09.050 10:56:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:09.050 10:56:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:09.050 10:56:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:09.050 10:56:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:59:09.050 10:56:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:59:09.050 10:56:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:59:09.050 10:56:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:59:09.050 10:56:14 -- spdk/autotest.sh@32 -- # uname -s 00:59:09.050 10:56:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:59:09.050 10:56:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:59:09.050 10:56:14 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:59:09.050 10:56:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:59:09.050 10:56:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:59:09.050 10:56:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:59:09.050 10:56:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:59:09.050 10:56:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:59:09.050 10:56:14 -- spdk/autotest.sh@48 -- # udevadm_pid=67278 00:59:09.050 10:56:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:59:09.050 10:56:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:59:09.050 10:56:14 -- pm/common@17 -- # local monitor 00:59:09.050 10:56:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:59:09.050 10:56:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:59:09.050 10:56:14 -- pm/common@25 -- # sleep 1 00:59:09.050 10:56:14 -- pm/common@21 -- # date +%s 00:59:09.050 10:56:14 -- pm/common@21 -- # date +%s 00:59:09.050 10:56:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721645774 00:59:09.050 10:56:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721645774 00:59:09.050 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721645774_collect-vmstat.pm.log 00:59:09.050 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721645774_collect-cpu-load.pm.log 00:59:09.982 10:56:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:59:09.982 10:56:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:59:09.982 10:56:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:59:09.982 10:56:15 -- common/autotest_common.sh@10 -- # set +x 00:59:09.982 10:56:15 -- spdk/autotest.sh@59 -- # create_test_list 00:59:09.982 10:56:15 -- common/autotest_common.sh@746 -- # xtrace_disable 00:59:09.982 10:56:15 -- common/autotest_common.sh@10 -- # set +x 00:59:10.239 10:56:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:59:10.239 10:56:15 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:59:10.239 10:56:15 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:59:10.239 10:56:15 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:59:10.239 10:56:15 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:59:10.239 10:56:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:59:10.239 10:56:15 -- common/autotest_common.sh@1455 -- # uname 00:59:10.239 10:56:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:59:10.239 10:56:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:59:10.239 10:56:15 -- common/autotest_common.sh@1475 -- # uname 00:59:10.239 10:56:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:59:10.239 10:56:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:59:10.239 10:56:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:59:10.239 10:56:15 -- spdk/autotest.sh@72 -- # hash lcov 00:59:10.239 10:56:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:59:10.239 10:56:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:59:10.239 --rc lcov_branch_coverage=1 00:59:10.239 --rc lcov_function_coverage=1 00:59:10.239 --rc genhtml_branch_coverage=1 00:59:10.239 --rc genhtml_function_coverage=1 00:59:10.239 --rc genhtml_legend=1 00:59:10.239 --rc geninfo_all_blocks=1 00:59:10.239 ' 00:59:10.239 10:56:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:59:10.239 --rc lcov_branch_coverage=1 00:59:10.239 --rc lcov_function_coverage=1 00:59:10.239 --rc genhtml_branch_coverage=1 00:59:10.239 --rc genhtml_function_coverage=1 00:59:10.239 --rc genhtml_legend=1 00:59:10.239 --rc geninfo_all_blocks=1 00:59:10.239 ' 00:59:10.239 10:56:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:59:10.239 --rc lcov_branch_coverage=1 00:59:10.239 --rc lcov_function_coverage=1 00:59:10.239 --rc genhtml_branch_coverage=1 00:59:10.239 --rc genhtml_function_coverage=1 00:59:10.239 --rc genhtml_legend=1 00:59:10.239 --rc geninfo_all_blocks=1 00:59:10.239 --no-external' 00:59:10.239 10:56:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:59:10.239 --rc lcov_branch_coverage=1 00:59:10.239 --rc lcov_function_coverage=1 00:59:10.239 --rc genhtml_branch_coverage=1 00:59:10.239 --rc genhtml_function_coverage=1 00:59:10.239 --rc genhtml_legend=1 00:59:10.239 --rc geninfo_all_blocks=1 00:59:10.239 --no-external' 00:59:10.239 10:56:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:59:10.239 lcov: LCOV version 
1.14 00:59:10.239 10:56:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:59:28.305 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:59:28.305 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:59:40.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:59:40.494 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:59:40.494 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:59:40.494 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:59:40.495 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:59:40.495 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:59:40.495 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:59:40.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:59:43.778 10:56:48 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:59:43.778 10:56:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:59:43.778 10:56:48 -- common/autotest_common.sh@10 -- # set +x 00:59:43.778 10:56:48 -- spdk/autotest.sh@91 -- # rm -f 00:59:43.778 10:56:48 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:44.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:44.345 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:59:44.345 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:59:44.345 10:56:49 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:59:44.345 10:56:49 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:59:44.345 10:56:49 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:59:44.345 10:56:49 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:59:44.345 10:56:49 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:44.345 10:56:49 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:59:44.345 10:56:49 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:59:44.345 10:56:49 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:59:44.345 10:56:49 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:44.345 10:56:49 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:44.345 10:56:49 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:59:44.345 10:56:49 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:59:44.345 
10:56:49 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:59:44.345 10:56:49 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:44.345 10:56:49 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:44.345 10:56:49 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:59:44.345 10:56:49 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:59:44.345 10:56:49 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:59:44.345 10:56:49 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:44.345 10:56:49 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:44.345 10:56:49 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:59:44.345 10:56:49 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:59:44.345 10:56:49 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:59:44.345 10:56:49 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:44.345 10:56:49 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:59:44.345 10:56:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:59:44.345 10:56:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:59:44.345 10:56:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:59:44.345 10:56:49 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:59:44.345 10:56:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:59:44.603 No valid GPT data, bailing 00:59:44.603 10:56:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:59:44.603 10:56:49 -- scripts/common.sh@391 -- # pt= 00:59:44.603 10:56:49 -- scripts/common.sh@392 -- # return 1 00:59:44.603 10:56:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:59:44.604 1+0 records in 00:59:44.604 1+0 records out 00:59:44.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496706 s, 211 MB/s 00:59:44.604 10:56:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:59:44.604 10:56:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:59:44.604 10:56:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:59:44.604 10:56:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:59:44.604 10:56:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:59:44.604 No valid GPT data, bailing 00:59:44.604 10:56:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:59:44.604 10:56:49 -- scripts/common.sh@391 -- # pt= 00:59:44.604 10:56:49 -- scripts/common.sh@392 -- # return 1 00:59:44.604 10:56:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:59:44.604 1+0 records in 00:59:44.604 1+0 records out 00:59:44.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539101 s, 195 MB/s 00:59:44.604 10:56:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:59:44.604 10:56:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:59:44.604 10:56:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:59:44.604 10:56:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:59:44.604 10:56:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:59:44.604 No valid GPT data, bailing 00:59:44.604 10:56:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:59:44.604 10:56:49 -- scripts/common.sh@391 -- # pt= 00:59:44.604 10:56:49 -- scripts/common.sh@392 -- # return 1 
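[editor note] The zoned-namespace probing traced above walks /sys/block/nvme* and treats a namespace as zoned only when its queue/zoned attribute reports something other than "none". A minimal stand-alone Bash sketch of that check, reconstructed from the visible trace only (the real helper is get_zoned_devs in autotest_common.sh; the _sketch name and the printf at the end are additions here, not part of the script):

  # Sketch: collect zoned NVMe namespaces the way the traced helper appears to,
  # by reading /sys/block/<dev>/queue/zoned ("none" means a conventional device).
  get_zoned_devs_sketch() {
      local -A zoned=()
      local nvme dev mode
      for nvme in /sys/block/nvme*; do
          dev=${nvme##*/}
          [[ -e $nvme/queue/zoned ]] || continue
          mode=$(<"$nvme/queue/zoned")
          [[ $mode != none ]] && zoned[$dev]=$mode
      done
      printf '%s\n' "${!zoned[@]}"   # report any zoned namespaces found
  }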
00:59:44.604 10:56:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:59:44.604 1+0 records in 00:59:44.604 1+0 records out 00:59:44.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00549516 s, 191 MB/s 00:59:44.604 10:56:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:59:44.604 10:56:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:59:44.604 10:56:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:59:44.604 10:56:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:59:44.604 10:56:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:59:44.862 No valid GPT data, bailing 00:59:44.862 10:56:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:59:44.862 10:56:49 -- scripts/common.sh@391 -- # pt= 00:59:44.862 10:56:49 -- scripts/common.sh@392 -- # return 1 00:59:44.862 10:56:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:59:44.862 1+0 records in 00:59:44.862 1+0 records out 00:59:44.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486018 s, 216 MB/s 00:59:44.862 10:56:49 -- spdk/autotest.sh@118 -- # sync 00:59:44.862 10:56:49 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:59:44.862 10:56:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:59:44.862 10:56:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:59:46.761 10:56:51 -- spdk/autotest.sh@124 -- # uname -s 00:59:46.761 10:56:51 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:59:46.761 10:56:51 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:59:46.761 10:56:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:46.761 10:56:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:46.761 10:56:51 -- common/autotest_common.sh@10 -- # set +x 00:59:46.761 ************************************ 00:59:46.761 START TEST setup.sh 00:59:46.761 ************************************ 00:59:46.761 10:56:51 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:59:46.761 * Looking for test storage... 00:59:46.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:59:46.761 10:56:51 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:59:46.761 10:56:51 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:59:46.761 10:56:51 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:59:46.761 10:56:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:46.761 10:56:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:46.761 10:56:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:59:46.761 ************************************ 00:59:46.761 START TEST acl 00:59:46.761 ************************************ 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:59:46.761 * Looking for test storage... 
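[editor note] The pre-cleanup pass that finishes above probes every whole /dev/nvme*n* namespace for a partition-table signature (spdk-gpt.py plus blkid -s PTTYPE) and, when nothing valid is found ("No valid GPT data, bailing"), zero-fills the first 1 MiB with dd so stale metadata cannot leak into the tests. A minimal sketch of that pattern; it approximates the log's extglob filter !(*p*) with an explicit partition skip and uses blkid alone, since spdk-gpt.py is an in-repo helper:

  # Destructive sketch: zero the first 1 MiB of every whole NVMe namespace that
  # shows no partition-table signature, mirroring the dd commands traced above.
  for dev in /dev/nvme*n*; do
      [[ -b $dev ]] || continue
      [[ $dev == *p* ]] && continue   # skip partitions (e.g. nvme0n1p1)
      if [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done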
00:59:46.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:59:46.761 10:56:51 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:59:46.761 10:56:51 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:59:46.762 10:56:51 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:59:46.762 10:56:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:46.762 10:56:51 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:46.762 10:56:51 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:59:46.762 10:56:51 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:59:46.762 10:56:51 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:59:46.762 10:56:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:46.762 10:56:51 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:59:46.762 10:56:51 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:59:46.762 10:56:51 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:59:46.762 10:56:51 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:59:46.762 10:56:51 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:59:46.762 10:56:51 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:59:46.762 10:56:51 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:47.693 10:56:52 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:59:47.693 10:56:52 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:59:47.693 10:56:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:59:47.693 10:56:52 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:59:47.693 10:56:52 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:59:47.693 10:56:52 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:59:48.257 10:56:53 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:59:48.257 Hugepages 00:59:48.257 node hugesize free / total 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:59:48.257 00:59:48.257 Type BDF Vendor Device NUMA Driver Device Block devices 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:59:48.257 10:56:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:59:48.257 10:56:53 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:48.257 10:56:53 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:48.257 10:56:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:59:48.257 ************************************ 00:59:48.257 START TEST denied 00:59:48.257 ************************************ 00:59:48.257 10:56:53 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:59:48.257 10:56:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:59:48.257 10:56:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:59:48.257 10:56:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:59:48.257 10:56:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:59:48.257 10:56:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:59:49.189 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:59:49.189 10:56:54 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:49.755 00:59:49.755 real 0m1.403s 00:59:49.755 user 0m0.555s 00:59:49.755 sys 0m0.826s 00:59:49.755 10:56:54 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:49.755 10:56:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:59:49.755 ************************************ 00:59:49.755 END TEST denied 00:59:49.755 ************************************ 00:59:49.755 10:56:54 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:59:49.755 10:56:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:59:49.755 10:56:54 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:49.755 10:56:54 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:49.755 10:56:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:59:49.755 ************************************ 00:59:49.755 START TEST allowed 00:59:49.755 ************************************ 00:59:49.755 10:56:54 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:59:49.755 10:56:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:59:49.755 10:56:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:59:49.755 10:56:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:59:49.755 10:56:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:59:49.755 10:56:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:59:50.688 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:59:50.688 10:56:55 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:51.252 00:59:51.252 real 0m1.545s 00:59:51.252 user 0m0.689s 00:59:51.252 sys 0m0.837s 00:59:51.252 10:56:56 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:59:51.252 ************************************ 00:59:51.252 END TEST allowed 00:59:51.252 ************************************ 00:59:51.252 10:56:56 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:59:51.511 10:56:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:59:51.511 00:59:51.511 real 0m4.639s 00:59:51.511 user 0m2.070s 00:59:51.511 sys 0m2.537s 00:59:51.511 10:56:56 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:51.512 10:56:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:59:51.512 ************************************ 00:59:51.512 END TEST acl 00:59:51.512 ************************************ 00:59:51.512 10:56:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:59:51.512 10:56:56 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:59:51.512 10:56:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:51.512 10:56:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:51.512 10:56:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:59:51.512 ************************************ 00:59:51.512 START TEST hugepages 00:59:51.512 ************************************ 00:59:51.512 10:56:56 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:59:51.512 * Looking for test storage... 00:59:51.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4446056 kB' 'MemAvailable: 7400172 kB' 'Buffers: 2436 kB' 'Cached: 3154944 kB' 'SwapCached: 0 kB' 'Active: 477872 kB' 'Inactive: 2784664 kB' 'Active(anon): 115648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 106824 kB' 'Mapped: 48860 kB' 'Shmem: 10492 kB' 'KReclaimable: 88332 kB' 'Slab: 166952 kB' 'SReclaimable: 88332 kB' 'SUnreclaim: 78620 kB' 'KernelStack: 6604 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 339032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.512 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:59:51.513 10:56:56 
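The scan above is setup/common.sh's get_meminfo loop: it reads /proc/meminfo with IFS=': ', skips every key that is not the one requested, and echoes the matching value, which is why the Hugepagesize lookup ends in "echo 2048" and hugepages.sh then records default_hugepages=2048. A minimal standalone sketch of that pattern, assuming a Linux /proc/meminfo (get_meminfo_value is an illustrative name, not the SPDK helper itself):

  #!/usr/bin/env bash
  # Print the value column of one /proc/meminfo key, mirroring the
  # IFS=': ' read / continue / echo pattern shown in the trace above.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip non-matching keys, as in common.sh@32
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1                               # key not present
  }

  default_hugepages=$(get_meminfo_value Hugepagesize)   # 2048 (kB) on this VM
  echo "default hugepage size: ${default_hugepages} kB"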
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:59:51.513 10:56:56 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:59:51.513 10:56:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:51.513 10:56:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:51.513 10:56:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:59:51.513 ************************************ 00:59:51.513 START TEST default_setup 00:59:51.514 ************************************ 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:59:51.514 10:56:56 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:59:52.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:52.451 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:59:52.451 0000:00:11.0 (1b36 
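Between the Hugepagesize lookup and the default_setup test, hugepages.sh derives how many pages the test needs and clears the per-node pools: with one NUMA node (no_nodes=1) and a 2048 kB page size, the 2097152 request passed to get_test_nr_hugepages comes out to nr_hugepages=1024, all assigned to node 0, and scripts/setup.sh then rebinds the test NVMe controllers (1b36 0010) to uio_pci_generic while leaving the mounted boot disk (1af4 1001) alone. A rough sketch of that arithmetic and of the sysfs pool that clear_hp zeroes (treating the size argument as kB is an assumption here; the commented writes need root):

  #!/usr/bin/env bash
  # Recompute the hugepage request seen in the trace and name the per-node
  # sysfs pool that the "echo 0" lines of clear_hp correspond to.
  size_kb=2097152                         # argument to get_test_nr_hugepages, read as kB here
  page_kb=2048                            # default hugepage size from /proc/meminfo
  nr_hugepages=$(( size_kb / page_kb ))   # -> 1024, matching nr_hugepages=1024 above

  pool=/sys/devices/system/node/node0/hugepages/hugepages-${page_kb}kB/nr_hugepages
  echo "node 0 pool: $pool"
  echo "pages to reserve: $nr_hugepages"
  # echo 0               > "$pool"        # clear_hp equivalent (root only)
  # echo "$nr_hugepages" > "$pool"        # reservation step (root only)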
0010): nvme -> uio_pci_generic 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6536924 kB' 'MemAvailable: 9490864 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 494692 kB' 'Inactive: 2784672 kB' 'Active(anon): 132468 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123428 kB' 'Mapped: 49008 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166464 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78496 kB' 'KernelStack: 6544 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.451 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.452 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
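verify_nr_hugepages re-reads /proc/meminfo once per counter, which is why the scan that just finished set anon=0 and the same key-by-key loop now repeats for HugePages_Surp and, after that, HugePages_Rsvd; the snapshot captured above already shows HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0. A compact way to express the same checks, using an illustrative meminfo_kv helper rather than the SPDK code:

  #!/usr/bin/env bash
  # Pull the counters that verify_nr_hugepages inspects out of /proc/meminfo,
  # one awk pass per key; meminfo_kv is illustrative only.
  meminfo_kv() { awk -F': *' -v k="$1" '$1 == k { print $2 + 0 }' /proc/meminfo; }

  total=$(meminfo_kv HugePages_Total)
  free=$(meminfo_kv HugePages_Free)
  rsvd=$(meminfo_kv HugePages_Rsvd)
  surp=$(meminfo_kv HugePages_Surp)
  anon=$(meminfo_kv AnonHugePages)

  echo "HugePages: total=$total free=$free rsvd=$rsvd surp=$surp anon_thp_kb=$anon"
  # Expected for the run above: total=1024 free=1024 rsvd=0 surp=0 anon_thp_kb=0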
# mem=("${mem[@]#Node +([0-9]) }") 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6536924 kB' 'MemAvailable: 9490868 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 494716 kB' 'Inactive: 2784676 kB' 'Active(anon): 132492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123436 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166464 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78496 kB' 'KernelStack: 6512 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.453 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:52.454 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6536676 kB' 'MemAvailable: 9490620 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 494556 kB' 'Inactive: 2784676 kB' 'Active(anon): 132332 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123528 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166464 kB' 'SReclaimable: 87968 kB' 
'SUnreclaim: 78496 kB' 'KernelStack: 6528 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.455 10:56:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.456 10:56:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:59:52.456 nr_hugepages=1024 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:59:52.456 resv_hugepages=0 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:59:52.456 surplus_hugepages=0 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:59:52.456 anon_hugepages=0 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:52.456 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6536676 kB' 'MemAvailable: 9490620 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 494528 kB' 'Inactive: 2784676 kB' 'Active(anon): 132304 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123480 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166464 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78496 kB' 'KernelStack: 6512 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
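The block of xtrace output above is one iteration per /proc/meminfo field: setup/common.sh sets IFS=': ', reads each row into var/val with read -r var val _, compares var against the key it was asked for (here HugePages_Total, earlier HugePages_Surp and HugePages_Rsvd), and hits continue on every non-match until the key is found and its value is echoed. A minimal sketch of that lookup pattern follows; get_meminfo_field is a hypothetical name and the body is simplified from the trace, not the verbatim setup/common.sh implementation.

    # Hypothetical helper, simplified from the lookup pattern traced above
    # (not the verbatim setup/common.sh): print the value of one /proc/meminfo key.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching key appears in the xtrace as a "[[ X == ... ]]" / "continue" pair.
            [[ $var == "$get" ]] || continue
            echo "$val"      # e.g. HugePages_Total -> 1024, MemFree -> size in kB
            return 0
        done < /proc/meminfo
        return 1
    }

    # Example for this runner: get_meminfo_field HugePages_Rsvd   -> 0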
00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.457 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
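The arithmetic this second scan feeds is already visible a few lines up: setup/hugepages.sh echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then evaluated (( 1024 == nr_hugepages + surp + resv )) at hugepages.sh@107. Restated with this run's numbers (an illustrative re-statement, not extra script code):

    # Values taken from the trace above (this run only):
    nr_hugepages=1024   # requested default hugepages
    surp=0              # HugePages_Surp read via get_meminfo
    resv=0              # HugePages_Rsvd read via get_meminfo

    # The consistency check traced at setup/hugepages.sh@107:
    # 1024 == 1024 + 0 + 0, so the requested pages are fully accounted for.
    (( 1024 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'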
00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:59:52.458 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6536676 kB' 'MemUsed: 5705304 kB' 'SwapCached: 0 kB' 'Active: 494228 kB' 'Inactive: 2784676 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 3157368 kB' 'Mapped: 48824 kB' 'AnonPages: 123180 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87968 kB' 'Slab: 166464 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.718 
10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.718 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
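The long run of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' lines above is the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp here), then echoing its value. Below is a minimal sketch of that helper, reconstructed from the trace; the argument handling, the per-node fallback and the exact function body are inferred, not the verbatim setup/common.sh source:

    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo() {    # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-}
        local var val _ mem
        local mem_f=/proc/meminfo

        # When a node id is given, the per-node file is read instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # strip the "Node N " prefix of per-node files

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the repeated 'continue' lines in the trace
            echo "$val"                         # e.g. 0 for HugePages_Surp
            return 0
        done
        return 1
    }

Read this way, the '# echo 0' / '# return 0' pair that closes the loop above is simply the helper reporting HugePages_Surp: 0 back to the default_setup verification.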
00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:59:52.719 node0=1024 expecting 1024 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:59:52.719 00:59:52.719 real 0m1.005s 00:59:52.719 user 0m0.448s 00:59:52.719 sys 0m0.489s 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:52.719 10:56:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:59:52.719 ************************************ 00:59:52.719 END TEST default_setup 00:59:52.719 ************************************ 00:59:52.719 10:56:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:59:52.719 10:56:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:59:52.719 10:56:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:52.719 10:56:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:52.719 10:56:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:59:52.719 ************************************ 00:59:52.719 START TEST per_node_1G_alloc 00:59:52.719 ************************************ 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:59:52.719 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:59:52.720 10:56:57 
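The per_node_1G_alloc test that begins above requests 1048576 kB (1 GiB) of hugepages pinned to node 0. With the 2048 kB Hugepagesize reported in the meminfo dumps later in this log, that request works out to the nr_hugepages=512 and nodes_test[0]=512 values seen in the trace. A rough sketch of that size-to-page-count step, with variable names and rounding behaviour assumed rather than taken from setup/hugepages.sh:

    default_hugepages=2048                     # kB per hugepage (Hugepagesize in meminfo)
    size=1048576                               # kB requested by get_test_nr_hugepages (1 GiB)
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
    fi
    echo "$nr_hugepages"                       # 512, the value assigned to nodes_test[0]

The NRHUGE=512 / HUGENODE=0 assignments and the scripts/setup.sh call that follow are what actually reserve those 512 pages on node 0; verify_nr_hugepages then re-reads /proc/meminfo, which is why the dumps below report 'HugePages_Total: 512' and 'Hugetlb: 1048576 kB'.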
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:59:52.720 10:56:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:59:52.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:52.980 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:52.980 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594296 kB' 'MemAvailable: 10548240 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 494632 kB' 'Inactive: 2784676 kB' 'Active(anon): 132408 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123500 kB' 'Mapped: 48892 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166456 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78488 kB' 'KernelStack: 6516 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.980 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.981 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.982 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594556 kB' 'MemAvailable: 10548500 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 494412 kB' 'Inactive: 2784676 kB' 'Active(anon): 132188 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123544 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166452 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6528 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.982 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.983 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:52.984 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594556 kB' 'MemAvailable: 10548500 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 494280 kB' 'Inactive: 2784676 kB' 'Active(anon): 132056 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123448 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166452 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6528 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.246 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 
10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:59:53.247 nr_hugepages=512 00:59:53.247 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:59:53.247 resv_hugepages=0 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:59:53.248 surplus_hugepages=0 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:59:53.248 anon_hugepages=0 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594556 kB' 'MemAvailable: 10548500 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 494460 kB' 'Inactive: 2784676 kB' 'Active(anon): 132236 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123368 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 
kB' 'Slab: 166452 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6512 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.248 10:56:58 
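The same scan repeats here for HugePages_Rsvd and HugePages_Total, and the hugepages.sh assertions recorded around it ('(( 512 == nr_hugepages + surp + resv ))' at @107/@110 and '(( 512 == nr_hugepages ))' at @109) tie the numbers together: 512 pages requested, nr_hugepages=512, surp=0, resv=0. A loose standalone reading of that accounting check, with check_hugepage_accounting as an illustrative name since the real script inlines the arithmetic:

    # The reported hugepage total must be covered by persistent + surplus + reserved
    # pages, and must equal the requested pool size.
    check_hugepage_accounting() {
        local requested=$1 nr_hugepages=$2 surp=$3 resv=$4
        (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages ))
    }
    # With this run's values: check_hugepage_accounting 512 512 0 0  ->  exit status 0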
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.248 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 
10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.249 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594624 kB' 'MemUsed: 4647356 kB' 'SwapCached: 0 kB' 'Active: 494500 kB' 'Inactive: 2784676 kB' 'Active(anon): 132276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3157368 kB' 'Mapped: 48824 kB' 'AnonPages: 123404 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 87968 kB' 'Slab: 166452 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.250 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:59:53.250 10:56:58 
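So far get_meminfo ran without a node argument, which is why the '-e /sys/devices/system/node/node/meminfo' test above kept failing and the values came from /proc/meminfo; for the per-node pass the trace just above shows node=0, the source switching to /sys/devices/system/node/node0/meminfo, and the 'Node 0 ' prefix being stripped from each line (common.sh@29) before the same key scan runs. A sketch of that source selection under the same paths, with node_meminfo_file as an illustrative helper rather than part of setup/common.sh:

    # Pick the per-node meminfo file when a node id is given, else the global one.
    node_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
            mem_f=/sys/devices/system/node/node${node}/meminfo
        echo "$mem_f"
    }
    # Per-node lines read 'Node 0 HugePages_Total: 512', hence the
    # "${mem[@]#Node +([0-9]) }" strip in the trace (an extglob pattern).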
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [get_meminfo HugePages_Surp: the remaining meminfo keys (Active(file) through HugePages_Free) are read and skipped with 'continue' until the requested key is reached]
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:59:53.251 node0=512 expecting 512
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:59:53.251 real 0m0.550s
00:59:53.251 user 0m0.269s
00:59:53.251 sys 0m0.316s
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:59:53.251 10:56:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:59:53.251 ************************************
00:59:53.251 END TEST per_node_1G_alloc
00:59:53.251 ************************************
00:59:53.251 10:56:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:59:53.251 10:56:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:59:53.251 10:56:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:59:53.251 10:56:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
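The trace above is the tail of a get_meminfo HugePages_Surp lookup: setup/common.sh walks the meminfo contents with IFS=': ', compares each key against the requested field, and echoes the value once it matches (0 here). A rough, self-contained sketch of that pattern, using an illustrative function name rather than the project's helper:

# Illustrative sketch only -- a simplified stand-in for the get_meminfo helper whose
# trace appears above. The real helper in test/setup/common.sh can also read
# /sys/devices/system/node/node<N>/meminfo, stripping the leading "Node <N> " prefix
# (the mem=("${mem[@]#Node +([0-9]) }") step in the trace); this sketch only covers
# the global /proc/meminfo case.
get_meminfo_sketch() {
    local get=$1 var val _
    # Walk /proc/meminfo key by key; skip everything until the requested key matches.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# Example: get_meminfo_sketch HugePages_Surp   -> prints 0 on this host, as in the log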
10:56:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:59:53.251 ************************************
00:59:53.251 START TEST even_2G_alloc
00:59:53.251 ************************************
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:59:53.251 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:59:53.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:59:53.508 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:59:53.508 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
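In the setup trace above, get_test_nr_hugepages turns the 2097152 kB request into nr_hugepages=1024 (2 GiB of 2048 kB pages) and, with a single NUMA node and no explicit node list, all 1024 pages land on node 0 before scripts/setup.sh runs with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A minimal sketch of an even per-node split, with illustrative names only; the real get_test_nr_hugepages_per_node in setup/hugepages.sh applies its own policy for user-supplied node lists:

# Illustrative sketch only -- approximates an even per-node hugepage split,
# not the actual get_test_nr_hugepages_per_node from setup/hugepages.sh.
split_hugepages_evenly() {
    local nr_hugepages=$1 no_nodes=$2
    local -a nodes_test=()
    local node
    # Give each node the same base share; spread any remainder one page at a time.
    for ((node = 0; node < no_nodes; node++)); do
        nodes_test[node]=$((nr_hugepages / no_nodes))
    done
    for ((node = 0; node < nr_hugepages % no_nodes; node++)); do
        ((nodes_test[node]++))
    done
    echo "${nodes_test[@]}"
}
# 2 GiB of 2048 kB pages: 2097152 / 2048 = 1024 pages in total.
split_hugepages_evenly 1024 1   # -> 1024 (the single-node case traced above)
split_hugepages_evenly 1024 2   # -> 512 512 (cf. the node0=512 expectation printed earlier)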
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6548248 kB' 'MemAvailable: 9502196 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494868 kB' 'Inactive: 2784680 kB' 'Active(anon): 132644 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123784 kB' 'Mapped: 48952 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166516 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78548 kB' 'KernelStack: 6552 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:59:53.768 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [scan: each snapshot key is read with IFS=': ' read -r var val _ and skipped with 'continue' until AnonHugePages is reached]
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
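For reference, the hugepage figures in the snapshot above are self-consistent with the 2 GiB request made at the start of this test. A quick check (illustrative arithmetic, not part of the test itself):

# HugePages_Total x Hugepagesize should match the Hugetlb figure and the requested size.
echo $((1024 * 2048))        # 2097152 kB, matches 'Hugetlb: 2097152 kB'
echo $((2097152 / 2048))     # 1024 pages, matches nr_hugepages=1024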
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot -- identical to the one above except: 'Active: 494764 kB' 'Active(anon): 132540 kB' 'AnonPages: 123672 kB' 'Mapped: 48824 kB' 'Slab: 166508 kB' 'SUnreclaim: 78540 kB' 'KernelStack: 6512 kB' 'PageTables: 4228 kB' 'VmallocUsed: 54884 kB']
00:59:53.769 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [scan: each key is compared against HugePages_Surp and skipped with 'continue' until it matches]
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' [third /proc/meminfo snapshot -- identical to the previous one except: 'Active: 494336 kB' 'Active(anon): 132112 kB' 'AnonPages: 123244 kB' 'PageTables: 4232 kB']
00:59:53.770 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [scan: each key is compared against HugePages_Rsvd and skipped with 'continue'; the raw trace of that scan continues below] 00:59:53.770 10:56:58
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:59:53.771 nr_hugepages=1024 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:59:53.771 resv_hugepages=0 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:59:53.771 surplus_hugepages=0 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:59:53.771 anon_hugepages=0 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6548248 kB' 'MemAvailable: 9502196 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494596 kB' 'Inactive: 2784680 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123504 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166508 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78540 kB' 'KernelStack: 6512 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:53.771 10:56:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.771 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:59:53.772 10:56:58 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6548248 kB' 'MemUsed: 5693732 kB' 'SwapCached: 0 kB' 'Active: 494460 kB' 'Inactive: 2784680 kB' 'Active(anon): 132236 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 3157372 kB' 'Mapped: 48824 kB' 'AnonPages: 123424 kB' 'Shmem: 10468 kB' 'KernelStack: 6480 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87968 kB' 'Slab: 166508 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.772 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:59:53.773 node0=1024 expecting 1024 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:59:53.773 00:59:53.773 real 0m0.566s 00:59:53.773 user 0m0.266s 00:59:53.773 sys 0m0.308s 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:53.773 10:56:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:59:53.773 ************************************ 00:59:53.773 END TEST even_2G_alloc 00:59:53.773 ************************************ 00:59:53.773 10:56:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:59:53.773 10:56:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:59:53.773 10:56:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:53.773 10:56:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:53.773 10:56:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:59:53.773 ************************************ 00:59:53.773 START TEST odd_alloc 00:59:53.773 ************************************ 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
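
The odd_alloc prologue above turns HUGEMEM=2049 into get_test_nr_hugepages 2098176 and ends up with nr_hugepages=1025 against the 2048 kB Hugepagesize reported in the meminfo snapshots that follow. Below is a minimal bash sketch of that arithmetic (not taken from the SPDK scripts themselves), assuming a ceiling division for the page count, plus a single-key /proc/meminfo lookup used in place of the traced per-line read loop in setup/common.sh; the helper name get_meminfo_value is illustrative only.

    #!/usr/bin/env bash
    # Sketch only: derive a hugepage count consistent with the numbers above,
    # and read one /proc/meminfo key directly. get_meminfo_value is an
    # illustrative helper, not a setup/common.sh function.

    hugemem_mb=2049                                  # HUGEMEM from the odd_alloc test
    size_kb=$(( hugemem_mb * 1024 ))                 # 2049 MB -> 2098176 kB (matches get_test_nr_hugepages 2098176)

    hugepage_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)   # 2048 kB on this runner

    # Ceiling division reproduces the odd count seen in the log:
    # (2098176 + 2047) / 2048 = 1025, while plain integer division would give 1024.
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))

    get_meminfo_value() {
        # Print the numeric value of one meminfo key, e.g. HugePages_Surp -> 0.
        local key=$1
        awk -v k="${key}:" '$1 == k { print $2 }' /proc/meminfo
    }

    echo "nr_hugepages=${nr_hugepages}"              # expected here: 1025
    echo "HugePages_Total=$(get_meminfo_value HugePages_Total)"
    echo "HugePages_Surp=$(get_meminfo_value HugePages_Surp)"

The meminfo snapshots printed further down show HugePages_Total: 1025 and HugePages_Free: 1025, which is the state the verify_nr_hugepages trace below is checking.
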
00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:59:53.773 10:56:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:59:54.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:54.340 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:54.340 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6540208 kB' 'MemAvailable: 9494156 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494844 kB' 'Inactive: 2784680 kB' 'Active(anon): 132620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123724 kB' 'Mapped: 48936 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166548 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78580 kB' 'KernelStack: 6504 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.340 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.340 10:56:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] [xtrace condensed: the identical continue / IFS=': ' / read -r cycle repeats for each remaining key from Active(anon) through Committed_AS in /proc/meminfo order, none of which matches AnonHugePages] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 
10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241980 kB' 'MemFree: 6540208 kB' 'MemAvailable: 9494156 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494392 kB' 'Inactive: 2784680 kB' 'Active(anon): 132168 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123368 kB' 'Mapped: 48936 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166544 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78576 kB' 'KernelStack: 6536 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.341 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.341 10:56:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' [xtrace condensed: the identical continue / IFS=': ' / read -r cycle repeats for each remaining key from Active through HugePages_Free in /proc/meminfo order, none of which matches HugePages_Surp] 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6540208 kB' 'MemAvailable: 9494156 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494736 kB' 'Inactive: 2784680 kB' 'Active(anon): 132512 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123652 kB' 'Mapped: 48936 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166544 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78576 kB' 'KernelStack: 6520 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.343 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.343 10:56:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the identical continue / IFS=': ' / read -r cycle repeats for each key from MemFree through FilePmdMapped in /proc/meminfo order, none of which matches HugePages_Rsvd] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:59:54.345 nr_hugepages=1025 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:59:54.345 resv_hugepages=0 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:59:54.345 surplus_hugepages=0 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:59:54.345 anon_hugepages=0 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6540568 kB' 'MemAvailable: 9494516 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494340 kB' 'Inactive: 2784680 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123272 kB' 'Mapped: 48828 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166548 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78580 kB' 'KernelStack: 6528 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.345 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:59:54.346 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6540828 kB' 'MemUsed: 5701152 kB' 'SwapCached: 0 kB' 'Active: 494600 kB' 'Inactive: 2784680 kB' 'Active(anon): 132376 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3157372 kB' 'Mapped: 48828 kB' 'AnonPages: 123532 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87968 kB' 'Slab: 166548 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
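The scan running through here is reading node 0's counters rather than the system-wide ones: hugepages.sh@117 called get_meminfo with HugePages_Surp and node 0, so common.sh switched mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo once that file was found (the @22-@24 lines traced above). A minimal standalone sketch of that source selection follows; the function name is illustrative, not the repo's, and the real logic is inlined in setup/common.sh.

    # Pick the meminfo source the way the traced helper does: per-node file
    # when a node index is given and present, otherwise /proc/meminfo.
    # (Illustrative name; a simplification of setup/common.sh@22-@24.)
    pick_meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }
    pick_meminfo_source 0   # -> /sys/devices/system/node/node0/meminfo
    pick_meminfo_source     # -> /proc/meminfo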
00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.347 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
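All of the IFS=': ' / read -r var val _ / continue lines above are one pattern repeated per meminfo key: common.sh dumps the whole meminfo file, splits each "Key:   value kB" line on ': ', and keeps skipping until the requested key (HugePages_Rsvd, HugePages_Total, HugePages_Surp, ...) matches, at which point it echoes the value (the "echo 0" / "echo 1025" lines). A compact sketch of that lookup under the same splitting rules; the function name is illustrative, and the sed strip of the per-node "Node N " prefix stands in for the extglob parameter expansion the repo applies after mapfile.

    # Return one field from a meminfo-style file, matching the traced loop:
    # split each "Key:   value kB" line on ': ' and stop at the wanted key.
    get_meminfo_field() {
        local want=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$file")   # per-node files prefix lines with "Node N "
        return 1
    }
    get_meminfo_field HugePages_Free /sys/devices/system/node/node0/meminfo   # -> 1025 on this run
    get_meminfo_field HugePages_Rsvd                                          # -> 0 on this run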
00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:59:54.348 node0=1025 expecting 1025 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:59:54.348 00:59:54.348 real 0m0.538s 00:59:54.348 user 0m0.269s 00:59:54.348 sys 0m0.306s 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:54.348 10:56:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:59:54.348 ************************************ 00:59:54.348 END TEST odd_alloc 00:59:54.348 ************************************ 00:59:54.348 10:56:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:59:54.348 10:56:59 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:59:54.348 10:56:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:54.348 10:56:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:54.348 10:56:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:59:54.348 ************************************ 00:59:54.348 START TEST custom_alloc 00:59:54.348 ************************************ 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:59:54.348 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:59:54.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:54.942 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:54.942 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594208 kB' 'MemAvailable: 10548156 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 495216 kB' 'Inactive: 2784680 kB' 'Active(anon): 132992 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 124088 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166556 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78588 kB' 'KernelStack: 6584 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
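The run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries above and below is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time until it reaches the requested field. Condensed into a minimal sketch (not the verbatim common.sh source; the per-node path is the optional form behind the "-e /sys/devices/system/node/node$node/meminfo" test traced above):

    # minimal get_meminfo sketch, assuming only the behaviour visible in this trace
    get_meminfo() {
        local get=$1 node=${2:-}      # field name, optional NUMA node
        local mem_f=/proc/meminfo
        # switch to the per-node file only when a node was given and its meminfo exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every skipped key shows up as one "continue" entry in the log
            echo "$val"                        # kB for sizes, a plain count for the HugePages_* fields
            return 0
        done < "$mem_f"
    }

Used as in the trace: get_meminfo AnonHugePages returns 0 here, which becomes anon=0 a little further down.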
00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.942 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.943 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594208 kB' 'MemAvailable: 10548156 kB' 'Buffers: 2436 kB' 'Cached: 
3154936 kB' 'SwapCached: 0 kB' 'Active: 494844 kB' 'Inactive: 2784680 kB' 'Active(anon): 132620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123744 kB' 'Mapped: 48828 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166572 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78604 kB' 'KernelStack: 6512 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.944 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
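For orientation among these repeated scans: verify_nr_hugepages (the hugepages.sh@89 block above) pulls several fields this way and then checks them against the 512 pages requested earlier. A hedged paraphrase of that flow, reusing the get_meminfo sketch above and annotated with the values this run actually reports:

    nr_hugepages=512                      # set at hugepages.sh@188 from the HUGENODE request
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    (( 512 == nr_hugepages + surp + resv ))   # hugepages.sh@107: 512 == 512 + 0 + 0, holds
    (( 512 == nr_hugepages ))                 # hugepages.sh@109: holds, so HugePages_Total is read back next (@110)

The surp and resv scans are the long key-by-key passes continuing below.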
00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594208 kB' 'MemAvailable: 10548156 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494556 kB' 'Inactive: 2784680 kB' 'Active(anon): 132332 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123400 kB' 'Mapped: 48828 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166568 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78600 kB' 'KernelStack: 6480 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.945 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 
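One figure worth pulling out of the meminfo snapshots repeated through this block: each reports 'HugePages_Total: 512', 'HugePages_Free: 512', 'Hugepagesize: 2048 kB' and 'Hugetlb: 1048576 kB', which is internally consistent:

    # 512 pages x 2048 kB/page accounts exactly for the Hugetlb line
    echo $(( 512 * 2048 ))    # -> 1048576 (kB), i.e. the 1 GiB that nodes_hp[0]=512 asked for

HugePages_Free equalling HugePages_Total also matches the trace: the pages were just allocated and nothing has mapped them yet.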
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.946 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.959 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
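Stepping back from the field-by-field scan: the request that produced these numbers was HUGENODE='nodes_hp[0]=512' handed to scripts/setup.sh near the top of this block. Purely as an illustration of the kernel interface involved (a hypothetical direct write, not what setup.sh literally executes), 512 pages of 2048 kB on node 0 correspond to:

    # illustration only -- standard per-node hugepage sysfs knob, assuming node 0 and 2 MiB pages
    echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    grep -E '^(HugePages_(Total|Free)|Hugepagesize|Hugetlb)' /proc/meminfo

The verification traced here then only has to confirm that /proc/meminfo reflects that request.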
00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.960 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:59:54.961 nr_hugepages=512 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:59:54.961 resv_hugepages=0 
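The block of xtrace above is setup/common.sh's get_meminfo scanning meminfo one "key: value" pair at a time: each [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] line is a literal string comparison against the requested key, and the loop falls through with continue until HugePages_Rsvd matches, echoes its value (0), and returns; hugepages.sh then records resv=0 and reports nr_hugepages=512 for this custom_alloc case. A minimal standalone sketch of that lookup, reconstructed from the trace (the name get_meminfo_sketch and the not-found fallback are illustrative, not from the log):

    #!/usr/bin/env bash
    # Sketch of the lookup pattern shown in the xtrace above; assumes only the
    # bash facilities the trace itself uses (mapfile, extglob, IFS=': ' read).
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        # With a node id, prefer the per-node counters exported through sysfs
        # (the trace tests /sys/devices/system/node/node$node/meminfo the same way).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node lines carry a "Node <id> " prefix; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Rsvd:   0" into key and value; the
            # backslash-escaped patterns in the trace are just literal compares.
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        echo 0   # illustrative fallback when the key is absent
    }

    # Values matching the trace: get_meminfo_sketch HugePages_Rsvd     -> 0
    #                            get_meminfo_sketch HugePages_Total 0  -> 512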
00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:59:54.961 surplus_hugepages=0 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:59:54.961 anon_hugepages=0 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:54.961 10:56:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594468 kB' 'MemAvailable: 10548416 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494816 kB' 'Inactive: 2784680 kB' 'Active(anon): 132592 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123660 kB' 'Mapped: 48828 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166568 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78600 kB' 'KernelStack: 6480 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.961 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 
10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.962 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594588 kB' 'MemUsed: 4647392 kB' 'SwapCached: 0 kB' 'Active: 494576 kB' 'Inactive: 2784680 kB' 'Active(anon): 132352 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3157372 kB' 'Mapped: 48828 kB' 'AnonPages: 123456 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87968 kB' 'Slab: 166556 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.963 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:59:54.964 node0=512 expecting 512 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:59:54.964 00:59:54.964 real 0m0.532s 00:59:54.964 user 0m0.265s 00:59:54.964 sys 0m0.301s 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:54.964 10:57:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:59:54.964 ************************************ 00:59:54.964 END TEST custom_alloc 
00:59:54.964 ************************************ 00:59:54.964 10:57:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:59:54.964 10:57:00 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:59:54.964 10:57:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:54.964 10:57:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:54.964 10:57:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:59:54.964 ************************************ 00:59:54.964 START TEST no_shrink_alloc 00:59:54.964 ************************************ 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:59:54.964 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:59:54.965 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:59:55.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:55.540 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:55.540 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:59:55.540 
10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:55.540 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6545520 kB' 'MemAvailable: 9499468 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 495048 kB' 'Inactive: 2784680 kB' 'Active(anon): 132824 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123668 kB' 'Mapped: 48932 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166600 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78632 kB' 'KernelStack: 6488 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
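Here verify_nr_hugepages starts for the no_shrink_alloc case. The comparison traced at hugepages.sh@96, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], gates the AnonHugePages sample on transparent hugepages not being set to [never]; the left-hand string has the format of /sys/kernel/mm/transparent_hugepage/enabled, but the path itself is not printed in the trace, so treat it as an assumption. A minimal sketch of that gate (the awk lookup stands in for the common.sh helper purely for brevity):

    #!/usr/bin/env bash
    # Assumed path of the THP setting; the trace only shows the expanded string.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon_kb=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled outright, so anonymous huge pages may exist;
        # sample the same counter the trace reads (AnonHugePages, in kB).
        anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "anon_hugepages=${anon_kb} kB"

Against the meminfo dump above, which reports 'AnonHugePages: 0 kB', this would print anon_hugepages=0 kB.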
00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 
10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.541 
10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.541 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6546040 kB' 'MemAvailable: 9499988 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 494660 kB' 'Inactive: 2784680 kB' 'Active(anon): 132436 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123504 kB' 'Mapped: 48828 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166608 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78640 kB' 'KernelStack: 6528 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.542 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.543 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6547176 kB' 'MemAvailable: 9501124 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 489576 kB' 'Inactive: 2784680 kB' 'Active(anon): 127352 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118480 kB' 'Mapped: 48088 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166516 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78548 kB' 'KernelStack: 6416 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.544 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.545 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:55.546 nr_hugepages=1024 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:59:55.546 resv_hugepages=0 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:59:55.546 surplus_hugepages=0 00:59:55.546 anon_hugepages=0 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
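The repeated "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries traced above are setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or a per-NUMA-node meminfo file when a node is given), splits each line on ': ' with read -r var val _, skips every key that does not match the requested field, and echoes the matching value (here 0, giving the anon=0, surp=0 and resv=0 assignments seen above). A minimal sketch of that loop, reading /proc/meminfo directly rather than reproducing the exact helper (the real one also strips the "Node <n>" prefix, as the mem=(...) expansion above shows):

  # Sketch only: the same field-matching loop the xtrace shows, simplified.
  get_meminfo_sketch() {
      local get="$1" var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
          echo "$val"                        # numeric value; a trailing "kB" lands in $_
          return 0
      done < /proc/meminfo
  }
  anon=$(get_meminfo_sketch AnonHugePages)    # 0 in this run
  surp=$(get_meminfo_sketch HugePages_Surp)   # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0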
00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6547176 kB' 'MemAvailable: 9501124 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 489448 kB' 'Inactive: 2784680 kB' 'Active(anon): 127224 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118444 kB' 'Mapped: 48348 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166468 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78500 kB' 'KernelStack: 6448 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.546 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
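The (( 1024 == nr_hugepages + surp + resv )) checks interleaved in this trace are plain pool accounting: the kernel's HugePages_Total has to equal the configured pool plus any surplus and reserved pages. A rough recreation of that check, reusing the hypothetical get_meminfo_value helper sketched earlier (both names are illustrative, not the script's own):

  # Hedged sketch of the accounting check; expected is the configured pool size (1024 here).
  verify_hugepage_accounting() {
      local expected=$1
      local total surp resv
      total=$(get_meminfo_value HugePages_Total)   # 1024 in this snapshot
      surp=$(get_meminfo_value HugePages_Surp)     # 0
      resv=$(get_meminfo_value HugePages_Rsvd)     # 0
      (( total == expected + surp + resv ))
  }

  verify_hugepage_accounting 1024 && echo "hugepage accounting consistent"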
00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:59:55.547 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6547176 kB' 'MemUsed: 5694804 kB' 'SwapCached: 0 kB' 'Active: 489260 kB' 'Inactive: 2784680 kB' 'Active(anon): 127036 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 3157372 kB' 'Mapped: 48088 kB' 'AnonPages: 118204 kB' 
'Shmem: 10468 kB' 'KernelStack: 6384 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87968 kB' 'Slab: 166420 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.548 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.548 
10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:55.549 node0=1024 expecting 1024 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:59:55.549 10:57:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:59:55.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:56.069 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:56.069 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:59:56.069 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:59:56.069 10:57:01 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:59:56.069 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:59:56.069 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:59:56.069 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:59:56.069 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:59:56.069 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6548196 kB' 'MemAvailable: 9502144 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 489512 kB' 'Inactive: 2784680 kB' 'Active(anon): 127288 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118440 kB' 'Mapped: 48304 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166364 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78396 kB' 'KernelStack: 6432 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
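The "INFO: Requested 512 hugepages but 1024 already allocated on node0" line just above is the behaviour this no_shrink_alloc case exercises: with CLEAR_HUGE=no and NRHUGE=512, asking scripts/setup.sh for fewer pages than are already allocated must leave the larger pool in place, which is why the verify pass that follows still expects node0=1024. A minimal sketch of such a guard, assuming it is driven by the per-node nr_hugepages sysfs counter (the function name and exact flow are illustrative, not setup.sh's actual code):

  # Hedged sketch of a no-shrink guard matching the INFO message above (requires root to write).
  maybe_grow_hugepages() {
      local node=$1 requested=$2
      local nr_file=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      local current
      current=$(<"$nr_file")
      if (( requested <= current )); then
          echo "INFO: Requested $requested hugepages but $current already allocated on node$node"
          return 0                      # keep the larger existing allocation, never shrink
      fi
      echo "$requested" > "$nr_file"    # only grow the pool
  }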
00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.070 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
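The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test a few entries back compares the transparent_hugepage "enabled" string against the [never] setting: only when THP is not disabled does the script go on to read AnonHugePages (0 kB in this run, so anon ends up 0). A hedged sketch of that gate, again using an illustrative function name and the get_meminfo_value helper from the earlier sketch:

  # Hedged sketch: count anonymous THP usage only if THP is not set to [never].
  thp_anon_pages() {
      local enabled
      enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
      if [[ $enabled != *"[never]"* ]]; then
          get_meminfo_value AnonHugePages   # hypothetical helper sketched above
      else
          echo 0
      fi
  }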
00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6548196 kB' 'MemAvailable: 9502144 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 489252 kB' 'Inactive: 2784680 kB' 'Active(anon): 127028 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118192 kB' 'Mapped: 48088 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166364 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78396 kB' 'KernelStack: 6416 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.071 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 
10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.072 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.084 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6548252 kB' 'MemAvailable: 9502196 kB' 'Buffers: 2436 kB' 'Cached: 3154932 kB' 'SwapCached: 0 kB' 'Active: 489320 kB' 'Inactive: 2784676 kB' 'Active(anon): 127096 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118332 kB' 'Mapped: 48088 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166356 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78388 kB' 'KernelStack: 6368 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
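Both full snapshots printed above came from /proc/meminfo because the node argument is empty; when a node number is passed, the same helper reads /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix, and the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips that prefix so one parsing loop handles both files. A sketch of that normalization, assuming bash with extglob enabled for the +([0-9]) pattern (variable names mirror the trace, but the script itself is illustrative):

#!/usr/bin/env bash
# Read global or per-node meminfo into an array and strip the
# "Node <N> " prefix that per-node files add to every line.
shopt -s extglob

node=${1:-}                     # empty => use the global /proc/meminfo
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi

mapfile -t mem <"$mem_f"
mem=("${mem[@]#Node +([0-9]) }")    # no-op for the global file

printf '%s\n' "${mem[@]:0:3}"       # e.g. MemTotal / MemFree / MemAvailable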
00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.085 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
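The snapshots above report HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is self-consistent: 1024 pages x 2048 kB = 2097152 kB (2 GiB), and with only the default page size in use Hugetlb is simply the size of the pool. A quick standalone check of that identity against a live /proc/meminfo (an aside for the reader, not something the test itself runs):

#!/usr/bin/env bash
# Verify Hugetlb == HugePages_Total * Hugepagesize for a single-size pool.
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
size_kb=$(awk '$1 == "Hugepagesize:"  {print $2}' /proc/meminfo)
hugetlb_kb=$(awk '$1 == "Hugetlb:"    {print $2}' /proc/meminfo)

if (( total * size_kb == hugetlb_kb )); then
    echo "consistent: ${total} x ${size_kb} kB = ${hugetlb_kb} kB"
else
    echo "mismatch: ${total} x ${size_kb} kB != ${hugetlb_kb} kB (other page sizes in use?)" >&2
fi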
00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.086 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:56.087 nr_hugepages=1024 00:59:56.087 resv_hugepages=0 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:59:56.087 surplus_hugepages=0 00:59:56.087 anon_hugepages=0 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6548252 kB' 'MemAvailable: 9502200 kB' 'Buffers: 2436 kB' 'Cached: 3154936 kB' 'SwapCached: 0 kB' 'Active: 489212 kB' 'Inactive: 2784680 kB' 'Active(anon): 126988 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118152 kB' 'Mapped: 48088 kB' 'Shmem: 10468 kB' 'KReclaimable: 87968 kB' 'Slab: 166356 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78388 kB' 'KernelStack: 6384 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
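The values collected above (anon=0 from AnonHugePages, surp=0 from HugePages_Surp, resv=0 from HugePages_Rsvd, and nr_hugepages=1024) feed the two arithmetic checks the xtrace shows at hugepages.sh@107 and @109 before HugePages_Total is fetched. Reproduced as a standalone sketch with this run's values (the pool variable name for the expanded left-hand side, 1024, is made up for illustration):

#!/usr/bin/env bash
# The two checks visible in the trace, with the values this run echoed.
nr_hugepages=1024
surp=0      # HugePages_Surp
resv=0      # HugePages_Rsvd
anon=0      # AnonHugePages
pool=1024   # the literal left-hand value the xtrace expanded

(( pool == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
(( pool == nr_hugepages ))               || { echo "unexpected pool size" >&2; exit 1; }

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"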
00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.087 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
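Once HugePages_Total matches (just below), the helper echoes the page count and hugepages.sh checks the allocation arithmetic, then repeats the lookup per NUMA node for the surplus count. The gist of that bookkeeping, using the values visible in this run (1024 pages, one node) and the get_meminfo_sketch helper sketched above; the exact accumulation in hugepages.sh differs slightly, so treat this as a simplified reading of the trace:

  check_hugepages_sketch() {
      local expected=1024 surp=0 resv=0      # values observed in this run
      local total
      total=$(get_meminfo_sketch HugePages_Total)
      # Global check: allocated pages must equal requested + surplus + reserved.
      (( total == expected + surp + resv )) || return 1
      # Per-node check: each node's pages plus its surplus should match what
      # was requested for that node.
      local node have surp_n
      for node in /sys/devices/system/node/node[0-9]*; do
          node=${node##*node}
          have=$(get_meminfo_sketch HugePages_Total "$node")
          surp_n=$(get_meminfo_sketch HugePages_Surp "$node")
          echo "node$node=$(( have + surp_n )) expecting $expected"
          (( have + surp_n == expected )) || return 1
      done
  }

On this single-node VM that is exactly the "node0=1024 expecting 1024" line printed further down.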
00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:59:56.088 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6548252 kB' 'MemUsed: 5693728 kB' 'SwapCached: 0 kB' 'Active: 489248 kB' 'Inactive: 2784680 kB' 'Active(anon): 127024 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 
kB' 'Inactive(file): 2784680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 3157372 kB' 'Mapped: 48088 kB' 'AnonPages: 118152 kB' 'Shmem: 10468 kB' 'KernelStack: 6384 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87968 kB' 'Slab: 166356 kB' 'SReclaimable: 87968 kB' 'SUnreclaim: 78388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 
10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.089 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.090 10:57:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:59:56.090 node0=1024 expecting 1024 00:59:56.090 ************************************ 00:59:56.090 END TEST no_shrink_alloc 00:59:56.090 ************************************ 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:59:56.090 00:59:56.090 real 0m1.115s 00:59:56.090 user 0m0.532s 00:59:56.090 sys 0m0.600s 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:56.090 10:57:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:59:56.090 10:57:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:59:56.090 10:57:01 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:59:56.090 10:57:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:59:56.090 10:57:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:59:56.090 
10:57:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:59:56.348 10:57:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:59:56.348 10:57:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:59:56.348 10:57:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:59:56.348 10:57:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:59:56.348 10:57:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:59:56.348 00:59:56.348 real 0m4.748s 00:59:56.348 user 0m2.208s 00:59:56.348 sys 0m2.590s 00:59:56.348 10:57:01 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:56.348 10:57:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:59:56.348 ************************************ 00:59:56.348 END TEST hugepages 00:59:56.348 ************************************ 00:59:56.348 10:57:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:59:56.348 10:57:01 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:59:56.348 10:57:01 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:56.348 10:57:01 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:56.348 10:57:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:59:56.348 ************************************ 00:59:56.348 START TEST driver 00:59:56.348 ************************************ 00:59:56.348 10:57:01 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:59:56.348 * Looking for test storage... 00:59:56.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:59:56.348 10:57:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:59:56.348 10:57:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:59:56.348 10:57:01 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:56.913 10:57:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:59:56.913 10:57:01 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:56.913 10:57:01 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:56.913 10:57:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:59:56.913 ************************************ 00:59:56.913 START TEST guess_driver 00:59:56.913 ************************************ 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
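Above, the hugepages suite winds down by zeroing every per-node hugepage pool and exporting CLEAR_HUGE=yes before the driver tests start. A minimal sketch of that cleanup; the loop structure follows the trace, while the redirect target is not visible in the xtrace and is assumed to be each pool's nr_hugepages file (the standard sysfs knob), and the _sketch name is ours:

  # Reset every hugepage pool on every NUMA node to 0 (requires root).
  clear_hp_sketch() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              # The trace only shows "echo 0"; nr_hugepages is assumed here.
              echo 0 > "$hp/nr_hugepages"
          done
      done
      # Exported so that later setup.sh invocations start from a clean slate.
      export CLEAR_HUGE=yes
  }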
00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:59:56.913 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:59:56.913 Looking for driver=uio_pci_generic 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:59:56.913 10:57:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:59:57.479 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:59:57.479 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:59:57.479 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:59:57.737 10:57:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:58.304 00:59:58.304 real 0m1.425s 00:59:58.304 user 0m0.562s 00:59:58.304 sys 0m0.866s 00:59:58.304 10:57:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:59:58.304 10:57:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:59:58.304 ************************************ 00:59:58.304 END TEST guess_driver 00:59:58.304 ************************************ 00:59:58.304 10:57:03 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:59:58.304 00:59:58.304 real 0m2.114s 00:59:58.304 user 0m0.818s 00:59:58.304 sys 0m1.357s 00:59:58.304 ************************************ 00:59:58.304 END TEST driver 00:59:58.304 ************************************ 00:59:58.304 10:57:03 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:59:58.304 10:57:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:59:58.304 10:57:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:59:58.304 10:57:03 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:59:58.304 10:57:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:58.304 10:57:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:58.304 10:57:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:59:58.304 ************************************ 00:59:58.304 START TEST devices 00:59:58.304 ************************************ 00:59:58.304 10:57:03 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:59:58.579 * Looking for test storage... 00:59:58.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:59:58.579 10:57:03 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:59:58.579 10:57:03 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:59:58.579 10:57:03 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:59:58.579 10:57:03 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:59.143 10:57:04 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:59:59.143 10:57:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
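The guess_driver test that just finished picks a userspace I/O driver the way the trace shows: prefer VFIO when the IOMMU is usable, otherwise fall back to uio_pci_generic if modprobe can resolve it to a kernel module. A compressed sketch of that decision, reconstructed from the xtrace; the function name is ours, and the vfio-pci string on the first branch is an assumption since this run never takes it:

  pick_driver_sketch() {
      # VFIO is usable when IOMMU groups exist or the unsafe no-IOMMU override
      # is enabled; neither is true on this VM.
      shopt -s nullglob
      local groups=(/sys/kernel/iommu_groups/*)
      shopt -u nullglob
      local unsafe=
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci        # name assumed; this branch is not exercised here
          return 0
      fi
      # Fall back to uio_pci_generic only if modprobe can resolve it to a .ko,
      # mirroring the "modprobe --show-depends" probe in the trace.
      if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic
          return 0
      fi
      echo 'No valid driver found'
      return 1
  }

With no IOMMU groups on this VM, the sketch lands on uio_pci_generic, which is the driver the test then confirms against the setup.sh config output.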
00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:59:59.144 10:57:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:59:59.144 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:59:59.144 10:57:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:59:59.144 10:57:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:59:59.401 No valid GPT data, bailing 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:59:59.401 10:57:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:59:59.401 10:57:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:59:59.401 10:57:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:59:59.401 
10:57:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:59:59.401 No valid GPT data, bailing 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:59:59.401 10:57:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:59:59.401 10:57:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:59:59.401 10:57:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:59:59.401 No valid GPT data, bailing 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:59:59.401 10:57:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:59:59.401 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:59:59.401 10:57:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:59:59.401 10:57:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:59:59.402 10:57:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:59:59.402 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:59:59.402 10:57:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:59:59.402 10:57:04 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:59:59.659 No valid GPT data, bailing 00:59:59.659 10:57:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:59:59.659 10:57:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:59:59.659 10:57:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:59:59.659 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:59:59.659 10:57:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:59:59.659 10:57:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:59:59.659 10:57:04 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:59:59.659 10:57:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:59:59.659 10:57:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:59:59.659 10:57:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:59:59.659 10:57:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:59:59.659 10:57:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:59:59.659 10:57:04 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:59:59.659 10:57:04 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:59:59.659 10:57:04 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:59:59.659 10:57:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:59:59.659 ************************************ 00:59:59.659 START TEST nvme_mount 00:59:59.659 ************************************ 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:59:59.659 10:57:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 01:00:00.592 Creating new GPT entries in memory. 01:00:00.592 GPT data structures destroyed! You may now partition the disk using fdisk or 01:00:00.592 other utilities. 01:00:00.592 10:57:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 01:00:00.592 10:57:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:00:00.592 10:57:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 01:00:00.592 10:57:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 01:00:00.592 10:57:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 01:00:01.525 Creating new GPT entries in memory. 01:00:01.525 The operation has completed successfully. 01:00:01.525 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 01:00:01.525 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:00:01.525 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71532 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:01.783 10:57:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:02.041 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:02.041 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:02.041 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:02.041 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 01:00:02.300 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 01:00:02.300 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 01:00:02.558 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 01:00:02.558 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 01:00:02.558 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 01:00:02.558 /dev/nvme0n1: calling ioctl to re-read partition table: Success 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:00:02.559 10:57:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:00:02.816 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:02.817 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 01:00:02.817 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 01:00:02.817 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:02.817 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:02.817 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:02.817 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:02.817 10:57:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:03.074 10:57:08 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:00:03.074 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:00:03.075 10:57:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:00:03.348 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:03.348 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 01:00:03.348 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 01:00:03.348 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:03.348 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:03.348 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 01:00:03.606 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 01:00:03.606 01:00:03.606 real 0m4.069s 01:00:03.606 user 0m0.740s 01:00:03.606 sys 0m1.074s 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:03.606 10:57:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 01:00:03.606 ************************************ 01:00:03.606 END TEST nvme_mount 01:00:03.606 ************************************ 01:00:03.606 10:57:08 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 01:00:03.606 10:57:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 01:00:03.606 10:57:08 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:03.606 10:57:08 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:03.606 10:57:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 01:00:03.606 ************************************ 01:00:03.606 START TEST dm_mount 01:00:03.606 ************************************ 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
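The nvme_mount test that just finished above boils down to: partition the namespace, format and mount the partition, drop a marker file, confirm that setup.sh reports the device as active (and therefore refuses to rebind it), then unwind with umount and wipefs. A minimal stand-alone sketch of that flow follows; the mount point and marker path are illustrative stand-ins, not the repo's exact test/setup/nvme_mount values.

    #!/usr/bin/env bash
    # Hedged sketch of the nvme_mount flow traced above; device and paths are illustrative.
    set -euo pipefail

    disk=/dev/nvme0n1            # the namespace under test in the trace
    part=${disk}p1
    mnt=/tmp/nvme_mount_demo     # stand-in for test/setup/nvme_mount

    sgdisk "$disk" --zap-all                  # clear any existing GPT/MBR metadata
    sgdisk "$disk" --new=1:2048:264191        # single small partition, as in the log
    # (the real script additionally waits for the partition uevent via scripts/sync_dev_uevents.sh)
    mkfs.ext4 -qF "$part"
    mkdir -p "$mnt"
    mount "$part" "$mnt"
    touch "$mnt/test_nvme"                    # marker file the verify step looks for

    # the real test now runs "setup.sh config" and expects the
    # "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev" line

    umount "$mnt"
    wipefs --all "$part"
    wipefs --all "$disk"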
01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 01:00:03.606 10:57:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 01:00:04.981 Creating new GPT entries in memory. 01:00:04.981 GPT data structures destroyed! You may now partition the disk using fdisk or 01:00:04.981 other utilities. 01:00:04.981 10:57:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 01:00:04.981 10:57:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:00:04.981 10:57:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 01:00:04.981 10:57:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 01:00:04.981 10:57:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 01:00:05.915 Creating new GPT entries in memory. 01:00:05.915 The operation has completed successfully. 01:00:05.915 10:57:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 01:00:05.915 10:57:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:00:05.915 10:57:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 01:00:05.915 10:57:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 01:00:05.915 10:57:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 01:00:06.853 The operation has completed successfully. 
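For dm_mount the same partitioning step runs twice: size=1073741824 is divided by 4096 to give 262144 sectors per partition, so the two sgdisk --new calls above cover sectors 2048-264191 and 264192-526335. A hedged reconstruction of just that step, with flock serialising each sgdisk call as the trace shows:

    disk=/dev/nvme0n1
    size=$(( 1073741824 / 4096 ))     # 262144 sectors per partition, per the trace

    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + size - 1 ))                  # 2048..264191
    flock "$disk" sgdisk "$disk" --new=2:$(( 2048 + size )):$(( 2048 + 2*size - 1 ))  # 264192..526335

The two partitions are then combined into a single device-mapper node (dmsetup create nvme_dm_test further down), which is why the later verify step checks /sys/class/block/nvme0n1p1/holders/dm-0 and its p2 twin.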
01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71968 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:06.853 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:00:06.854 10:57:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:00:07.120 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:07.120 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 01:00:07.120 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 01:00:07.120 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.120 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:07.120 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.120 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:07.120 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 01:00:07.377 10:57:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 01:00:07.635 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:07.635 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 01:00:07.635 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 01:00:07.635 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.635 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:07.635 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.635 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:07.635 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 01:00:07.893 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 01:00:07.893 10:57:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 01:00:07.893 01:00:07.893 real 0m4.219s 01:00:07.893 user 0m0.461s 01:00:07.893 sys 0m0.709s 01:00:07.893 10:57:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:07.893 ************************************ 01:00:07.893 END TEST dm_mount 01:00:07.893 ************************************ 01:00:07.893 10:57:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 01:00:07.893 10:57:13 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 01:00:07.893 10:57:13 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 01:00:07.894 10:57:13 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 01:00:07.894 10:57:13 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 01:00:07.894 10:57:13 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 01:00:07.894 10:57:13 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 01:00:07.894 10:57:13 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 01:00:07.894 10:57:13 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 01:00:08.150 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 01:00:08.150 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 01:00:08.150 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 01:00:08.150 /dev/nvme0n1: calling ioctl to re-read partition table: Success 01:00:08.150 10:57:13 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 01:00:08.150 10:57:13 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 01:00:08.150 10:57:13 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 01:00:08.150 10:57:13 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 01:00:08.150 10:57:13 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 01:00:08.150 10:57:13 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 01:00:08.150 10:57:13 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 01:00:08.150 ************************************ 01:00:08.150 END TEST devices 01:00:08.150 ************************************ 01:00:08.150 01:00:08.150 real 0m9.852s 01:00:08.150 user 0m1.886s 01:00:08.150 sys 0m2.370s 01:00:08.150 10:57:13 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:08.151 10:57:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 01:00:08.408 10:57:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0 01:00:08.408 ************************************ 01:00:08.408 END TEST setup.sh 01:00:08.408 ************************************ 01:00:08.408 01:00:08.408 real 0m21.646s 01:00:08.408 user 0m7.074s 01:00:08.408 sys 0m9.041s 01:00:08.408 10:57:13 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:08.408 10:57:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 01:00:08.408 10:57:13 -- common/autotest_common.sh@1142 -- # return 0 01:00:08.408 10:57:13 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:00:08.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:00:08.973 Hugepages 01:00:08.973 node hugesize free / total 01:00:08.973 node0 1048576kB 0 / 0 01:00:08.973 node0 2048kB 2048 / 2048 01:00:08.973 01:00:08.973 Type BDF Vendor Device NUMA Driver Device Block devices 01:00:08.973 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:00:09.231 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 01:00:09.231 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 01:00:09.231 10:57:14 -- spdk/autotest.sh@130 -- # uname -s 01:00:09.231 10:57:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 01:00:09.231 10:57:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 01:00:09.231 10:57:14 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:00:09.795 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:00:10.054 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:00:10.054 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:00:10.054 10:57:15 -- common/autotest_common.sh@1532 -- # sleep 1 01:00:10.987 10:57:16 -- common/autotest_common.sh@1533 -- # bdfs=() 01:00:10.987 10:57:16 -- common/autotest_common.sh@1533 -- # local bdfs 01:00:10.987 10:57:16 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 01:00:10.987 10:57:16 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 01:00:10.987 10:57:16 -- common/autotest_common.sh@1513 -- # bdfs=() 01:00:10.987 10:57:16 -- common/autotest_common.sh@1513 -- # local bdfs 01:00:10.987 10:57:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:00:10.987 10:57:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:00:10.987 10:57:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:00:11.245 10:57:16 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:00:11.245 10:57:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:00:11.245 10:57:16 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:00:11.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:00:11.502 Waiting for block devices as requested 01:00:11.502 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:00:11.760 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:00:11.760 10:57:16 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 01:00:11.760 10:57:16 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 01:00:11.760 10:57:16 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:00:11.760 10:57:16 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 01:00:11.760 10:57:16 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:00:11.760 10:57:16 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 01:00:11.760 10:57:16 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:00:11.760 10:57:16 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 01:00:11.760 10:57:16 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 01:00:11.760 10:57:16 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 01:00:11.760 10:57:16 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 01:00:11.760 10:57:16 -- common/autotest_common.sh@1545 -- # grep oacs 01:00:11.760 10:57:16 -- common/autotest_common.sh@1545 -- # cut -d: -f2 01:00:11.760 10:57:16 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 01:00:11.760 10:57:16 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 01:00:11.760 10:57:16 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 01:00:11.760 10:57:16 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 01:00:11.760 10:57:16 -- common/autotest_common.sh@1554 -- # grep unvmcap 01:00:11.760 10:57:16 -- common/autotest_common.sh@1554 -- # cut -d: -f2 01:00:11.760 10:57:16 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 01:00:11.760 10:57:16 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 01:00:11.760 10:57:16 -- common/autotest_common.sh@1557 -- # continue 01:00:11.760 
10:57:16 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 01:00:11.760 10:57:16 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 01:00:11.760 10:57:16 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:00:11.760 10:57:16 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 01:00:11.760 10:57:16 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:00:11.760 10:57:16 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 01:00:11.760 10:57:16 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:00:11.760 10:57:16 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 01:00:11.760 10:57:16 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 01:00:11.760 10:57:16 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 01:00:11.760 10:57:16 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 01:00:11.760 10:57:16 -- common/autotest_common.sh@1545 -- # grep oacs 01:00:11.760 10:57:16 -- common/autotest_common.sh@1545 -- # cut -d: -f2 01:00:11.760 10:57:16 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 01:00:11.760 10:57:16 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 01:00:11.760 10:57:16 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 01:00:11.760 10:57:16 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 01:00:11.760 10:57:16 -- common/autotest_common.sh@1554 -- # grep unvmcap 01:00:11.760 10:57:16 -- common/autotest_common.sh@1554 -- # cut -d: -f2 01:00:11.760 10:57:16 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 01:00:11.760 10:57:16 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 01:00:11.760 10:57:16 -- common/autotest_common.sh@1557 -- # continue 01:00:11.760 10:57:16 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 01:00:11.760 10:57:16 -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:11.760 10:57:16 -- common/autotest_common.sh@10 -- # set +x 01:00:11.760 10:57:16 -- spdk/autotest.sh@138 -- # timing_enter afterboot 01:00:11.760 10:57:16 -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:11.760 10:57:16 -- common/autotest_common.sh@10 -- # set +x 01:00:11.760 10:57:16 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:00:12.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:00:12.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:00:12.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:00:12.694 10:57:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 01:00:12.694 10:57:17 -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:12.694 10:57:17 -- common/autotest_common.sh@10 -- # set +x 01:00:12.694 10:57:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 01:00:12.694 10:57:17 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 01:00:12.694 10:57:17 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 01:00:12.694 10:57:17 -- common/autotest_common.sh@1577 -- # bdfs=() 01:00:12.694 10:57:17 -- common/autotest_common.sh@1577 -- # local bdfs 01:00:12.694 10:57:17 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 01:00:12.694 10:57:17 -- common/autotest_common.sh@1513 -- # bdfs=() 01:00:12.694 10:57:17 -- common/autotest_common.sh@1513 -- # local bdfs 01:00:12.694 10:57:17 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:00:12.694 10:57:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:00:12.694 10:57:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:00:12.694 10:57:17 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:00:12.694 10:57:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:00:12.694 10:57:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 01:00:12.694 10:57:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 01:00:12.694 10:57:17 -- common/autotest_common.sh@1580 -- # device=0x0010 01:00:12.694 10:57:17 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:00:12.694 10:57:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 01:00:12.694 10:57:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 01:00:12.694 10:57:17 -- common/autotest_common.sh@1580 -- # device=0x0010 01:00:12.694 10:57:17 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:00:12.694 10:57:17 -- common/autotest_common.sh@1586 -- # printf '%s\n' 01:00:12.694 10:57:17 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 01:00:12.694 10:57:17 -- common/autotest_common.sh@1593 -- # return 0 01:00:12.694 10:57:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 01:00:12.694 10:57:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 01:00:12.694 10:57:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 01:00:12.694 10:57:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 01:00:12.694 10:57:17 -- spdk/autotest.sh@162 -- # timing_enter lib 01:00:12.694 10:57:17 -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:12.694 10:57:17 -- common/autotest_common.sh@10 -- # set +x 01:00:12.694 10:57:17 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 01:00:12.694 10:57:17 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:00:12.694 10:57:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:12.694 10:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:12.694 10:57:17 -- common/autotest_common.sh@10 -- # set +x 01:00:12.952 ************************************ 01:00:12.952 START TEST env 01:00:12.952 ************************************ 01:00:12.952 10:57:17 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:00:12.952 * Looking for test storage... 
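As an aside on the namespace-revert and OPAL-revert passes traced a little earlier: each controller is inspected with nvme id-ctrl, bit 3 of oacs (0x8) decides whether namespace management is supported, unvmcap tells whether any capacity would need reverting, and the PCI device ID from sysfs (0x0010 for these emulated controllers) is compared against 0x0a54 before any OPAL revert is attempted. A hedged sketch of those checks, with the controller list hard-coded for illustration:

    # Sketch of the per-controller checks; field names follow nvme-cli's id-ctrl output.
    for ctrl in /dev/nvme0 /dev/nvme1; do
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. " 0x12a"
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # e.g. " 0"
        if (( (oacs & 0x8) != 0 )) && (( unvmcap == 0 )); then
            echo "$ctrl: namespace management supported, nothing to revert"
        fi
    done

    bdf=0000:00:10.0
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 in this run
    [[ $device == 0x0a54 ]] || echo "$bdf: not an 0x0a54 part, skipping OPAL revert"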
01:00:12.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 01:00:12.952 10:57:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:00:12.952 10:57:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:12.952 10:57:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:12.952 10:57:17 env -- common/autotest_common.sh@10 -- # set +x 01:00:12.952 ************************************ 01:00:12.952 START TEST env_memory 01:00:12.952 ************************************ 01:00:12.952 10:57:18 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:00:12.952 01:00:12.952 01:00:12.952 CUnit - A unit testing framework for C - Version 2.1-3 01:00:12.952 http://cunit.sourceforge.net/ 01:00:12.952 01:00:12.952 01:00:12.952 Suite: memory 01:00:12.952 Test: alloc and free memory map ...[2024-07-22 10:57:18.050514] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:00:12.952 passed 01:00:12.952 Test: mem map translation ...[2024-07-22 10:57:18.082442] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 01:00:12.953 [2024-07-22 10:57:18.082650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 01:00:12.953 [2024-07-22 10:57:18.082813] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:00:12.953 [2024-07-22 10:57:18.082829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 01:00:12.953 passed 01:00:12.953 Test: mem map registration ...[2024-07-22 10:57:18.147415] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 01:00:12.953 [2024-07-22 10:57:18.147458] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 01:00:13.211 passed 01:00:13.211 Test: mem map adjacent registrations ...passed 01:00:13.211 01:00:13.211 Run Summary: Type Total Ran Passed Failed Inactive 01:00:13.211 suites 1 1 n/a 0 0 01:00:13.211 tests 4 4 4 0 0 01:00:13.211 asserts 152 152 152 0 n/a 01:00:13.211 01:00:13.211 Elapsed time = 0.216 seconds 01:00:13.211 01:00:13.211 real 0m0.235s 01:00:13.211 user 0m0.217s 01:00:13.211 sys 0m0.014s 01:00:13.211 10:57:18 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:13.211 10:57:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 01:00:13.211 ************************************ 01:00:13.211 END TEST env_memory 01:00:13.211 ************************************ 01:00:13.211 10:57:18 env -- common/autotest_common.sh@1142 -- # return 0 01:00:13.211 10:57:18 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:00:13.211 10:57:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:13.211 10:57:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:13.211 10:57:18 env -- common/autotest_common.sh@10 -- # set +x 01:00:13.211 ************************************ 01:00:13.211 START TEST env_vtophys 
01:00:13.211 ************************************ 01:00:13.211 10:57:18 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:00:13.211 EAL: lib.eal log level changed from notice to debug 01:00:13.211 EAL: Detected lcore 0 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 1 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 2 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 3 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 4 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 5 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 6 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 7 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 8 as core 0 on socket 0 01:00:13.211 EAL: Detected lcore 9 as core 0 on socket 0 01:00:13.211 EAL: Maximum logical cores by configuration: 128 01:00:13.211 EAL: Detected CPU lcores: 10 01:00:13.211 EAL: Detected NUMA nodes: 1 01:00:13.211 EAL: Checking presence of .so 'librte_eal.so.24.0' 01:00:13.211 EAL: Detected shared linkage of DPDK 01:00:13.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 01:00:13.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 01:00:13.211 EAL: Registered [vdev] bus. 01:00:13.211 EAL: bus.vdev log level changed from disabled to notice 01:00:13.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 01:00:13.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 01:00:13.211 EAL: pmd.net.i40e.init log level changed from disabled to notice 01:00:13.211 EAL: pmd.net.i40e.driver log level changed from disabled to notice 01:00:13.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 01:00:13.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 01:00:13.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 01:00:13.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 01:00:13.211 EAL: No shared files mode enabled, IPC will be disabled 01:00:13.211 EAL: No shared files mode enabled, IPC is disabled 01:00:13.211 EAL: Selected IOVA mode 'PA' 01:00:13.211 EAL: Probing VFIO support... 01:00:13.211 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:00:13.211 EAL: VFIO modules not loaded, skipping VFIO support... 01:00:13.211 EAL: Ask a virtual area of 0x2e000 bytes 01:00:13.211 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 01:00:13.211 EAL: Setting up physically contiguous memory... 
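The EAL probe above reports "Module /sys/module/vfio not found" and skips VFIO support, which is consistent with the earlier setup.sh output binding the NVMe controllers to uio_pci_generic. A rough way to reproduce that driver choice outside the harness (a heuristic sketch, not the logic of scripts/setup.sh):

    # Heuristic sketch: pick vfio-pci only when the module is present and the
    # kernel exposes IOMMU groups; otherwise fall back to uio_pci_generic.
    if [[ -e /sys/module/vfio_pci ]] && compgen -G "/sys/kernel/iommu_groups/*" > /dev/null; then
        driver=vfio-pci
    else
        driver=uio_pci_generic
    fi
    echo "would bind NVMe controllers to $driver"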
01:00:13.211 EAL: Setting maximum number of open files to 524288 01:00:13.211 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 01:00:13.211 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 01:00:13.211 EAL: Ask a virtual area of 0x61000 bytes 01:00:13.211 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 01:00:13.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:00:13.212 EAL: Ask a virtual area of 0x400000000 bytes 01:00:13.212 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 01:00:13.212 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 01:00:13.212 EAL: Ask a virtual area of 0x61000 bytes 01:00:13.212 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 01:00:13.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:00:13.212 EAL: Ask a virtual area of 0x400000000 bytes 01:00:13.212 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 01:00:13.212 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 01:00:13.212 EAL: Ask a virtual area of 0x61000 bytes 01:00:13.212 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 01:00:13.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:00:13.212 EAL: Ask a virtual area of 0x400000000 bytes 01:00:13.212 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 01:00:13.212 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 01:00:13.212 EAL: Ask a virtual area of 0x61000 bytes 01:00:13.212 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 01:00:13.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:00:13.212 EAL: Ask a virtual area of 0x400000000 bytes 01:00:13.212 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 01:00:13.212 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 01:00:13.212 EAL: Hugepages will be freed exactly as allocated. 01:00:13.212 EAL: No shared files mode enabled, IPC is disabled 01:00:13.212 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: TSC frequency is ~2200000 KHz 01:00:13.471 EAL: Main lcore 0 is ready (tid=7fb0de21ea00;cpuset=[0]) 01:00:13.471 EAL: Trying to obtain current memory policy. 01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 0 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 2MB 01:00:13.471 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: No PCI address specified using 'addr=' in: bus=pci 01:00:13.471 EAL: Mem event callback 'spdk:(nil)' registered 01:00:13.471 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 01:00:13.471 01:00:13.471 01:00:13.471 CUnit - A unit testing framework for C - Version 2.1-3 01:00:13.471 http://cunit.sourceforge.net/ 01:00:13.471 01:00:13.471 01:00:13.471 Suite: components_suite 01:00:13.471 Test: vtophys_malloc_test ...passed 01:00:13.471 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 4 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 4MB 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was shrunk by 4MB 01:00:13.471 EAL: Trying to obtain current memory policy. 01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 4 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 6MB 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was shrunk by 6MB 01:00:13.471 EAL: Trying to obtain current memory policy. 01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 4 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 10MB 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was shrunk by 10MB 01:00:13.471 EAL: Trying to obtain current memory policy. 01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 4 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 18MB 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was shrunk by 18MB 01:00:13.471 EAL: Trying to obtain current memory policy. 01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 4 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 34MB 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was shrunk by 34MB 01:00:13.471 EAL: Trying to obtain current memory policy. 
01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 4 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 66MB 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was shrunk by 66MB 01:00:13.471 EAL: Trying to obtain current memory policy. 01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 4 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 130MB 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was shrunk by 130MB 01:00:13.471 EAL: Trying to obtain current memory policy. 01:00:13.471 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.471 EAL: Restoring previous memory policy: 4 01:00:13.471 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.471 EAL: request: mp_malloc_sync 01:00:13.471 EAL: No shared files mode enabled, IPC is disabled 01:00:13.471 EAL: Heap on socket 0 was expanded by 258MB 01:00:13.730 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.730 EAL: request: mp_malloc_sync 01:00:13.730 EAL: No shared files mode enabled, IPC is disabled 01:00:13.730 EAL: Heap on socket 0 was shrunk by 258MB 01:00:13.730 EAL: Trying to obtain current memory policy. 01:00:13.730 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:13.730 EAL: Restoring previous memory policy: 4 01:00:13.730 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.730 EAL: request: mp_malloc_sync 01:00:13.730 EAL: No shared files mode enabled, IPC is disabled 01:00:13.730 EAL: Heap on socket 0 was expanded by 514MB 01:00:13.988 EAL: Calling mem event callback 'spdk:(nil)' 01:00:13.988 EAL: request: mp_malloc_sync 01:00:13.988 EAL: No shared files mode enabled, IPC is disabled 01:00:13.988 EAL: Heap on socket 0 was shrunk by 514MB 01:00:13.988 EAL: Trying to obtain current memory policy. 
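The expand/shrink rounds above step through 4, 6, 10, 18, 34, 66, 130 and 258 MB, and the trailing rounds below add 514 MB and 1026 MB: each size is 2^k + 2 MB for k = 1..10. As far as the log shows, each round only checks that the registered 'spdk' mem event callback sees a symmetric expand and shrink for that size. The ladder itself is easy to reproduce:

    # The allocation ladder seen in the EAL messages: (2^k + 2) MB for k = 1..10.
    for k in $(seq 1 10); do
        printf '%dMB ' $(( (1 << k) + 2 ))
    done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB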
01:00:13.988 EAL: Setting policy MPOL_PREFERRED for socket 0 01:00:14.246 EAL: Restoring previous memory policy: 4 01:00:14.246 EAL: Calling mem event callback 'spdk:(nil)' 01:00:14.246 EAL: request: mp_malloc_sync 01:00:14.246 EAL: No shared files mode enabled, IPC is disabled 01:00:14.246 EAL: Heap on socket 0 was expanded by 1026MB 01:00:14.504 EAL: Calling mem event callback 'spdk:(nil)' 01:00:14.762 passed 01:00:14.762 01:00:14.762 Run Summary: Type Total Ran Passed Failed Inactive 01:00:14.762 suites 1 1 n/a 0 0 01:00:14.762 tests 2 2 2 0 0 01:00:14.762 asserts 5274 5274 5274 0 n/a 01:00:14.762 01:00:14.762 Elapsed time = 1.367 seconds 01:00:14.762 EAL: request: mp_malloc_sync 01:00:14.762 EAL: No shared files mode enabled, IPC is disabled 01:00:14.762 EAL: Heap on socket 0 was shrunk by 1026MB 01:00:14.762 EAL: Calling mem event callback 'spdk:(nil)' 01:00:14.762 EAL: request: mp_malloc_sync 01:00:14.762 EAL: No shared files mode enabled, IPC is disabled 01:00:14.762 EAL: Heap on socket 0 was shrunk by 2MB 01:00:14.762 EAL: No shared files mode enabled, IPC is disabled 01:00:14.762 EAL: No shared files mode enabled, IPC is disabled 01:00:14.762 EAL: No shared files mode enabled, IPC is disabled 01:00:14.762 ************************************ 01:00:14.762 END TEST env_vtophys 01:00:14.762 ************************************ 01:00:14.762 01:00:14.762 real 0m1.567s 01:00:14.762 user 0m0.884s 01:00:14.762 sys 0m0.547s 01:00:14.762 10:57:19 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:14.762 10:57:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 01:00:14.762 10:57:19 env -- common/autotest_common.sh@1142 -- # return 0 01:00:14.762 10:57:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:00:14.762 10:57:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:14.762 10:57:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:14.762 10:57:19 env -- common/autotest_common.sh@10 -- # set +x 01:00:14.762 ************************************ 01:00:14.762 START TEST env_pci 01:00:14.762 ************************************ 01:00:14.762 10:57:19 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:00:14.762 01:00:14.762 01:00:14.762 CUnit - A unit testing framework for C - Version 2.1-3 01:00:14.762 http://cunit.sourceforge.net/ 01:00:14.762 01:00:14.762 01:00:14.762 Suite: pci 01:00:14.762 Test: pci_hook ...[2024-07-22 10:57:19.918463] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73167 has claimed it 01:00:14.762 passed 01:00:14.762 01:00:14.762 Run Summary: Type Total Ran Passed Failed Inactive 01:00:14.762 suites 1 1 n/a 0 0 01:00:14.762 tests 1 1 1 0 0 01:00:14.762 asserts 25 25 25 0 n/a 01:00:14.762 01:00:14.762 Elapsed time = 0.003 seconds 01:00:14.762 EAL: Cannot find device (10000:00:01.0) 01:00:14.762 EAL: Failed to attach device on primary process 01:00:14.762 01:00:14.762 real 0m0.022s 01:00:14.762 user 0m0.008s 01:00:14.762 sys 0m0.013s 01:00:14.762 10:57:19 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:14.762 10:57:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 01:00:14.762 ************************************ 01:00:14.762 END TEST env_pci 01:00:14.762 ************************************ 01:00:14.762 10:57:19 env -- common/autotest_common.sh@1142 -- # 
return 0 01:00:14.762 10:57:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 01:00:14.762 10:57:19 env -- env/env.sh@15 -- # uname 01:00:15.020 10:57:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 01:00:15.020 10:57:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 01:00:15.020 10:57:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:00:15.020 10:57:19 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 01:00:15.020 10:57:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:15.020 10:57:19 env -- common/autotest_common.sh@10 -- # set +x 01:00:15.020 ************************************ 01:00:15.020 START TEST env_dpdk_post_init 01:00:15.020 ************************************ 01:00:15.020 10:57:19 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:00:15.020 EAL: Detected CPU lcores: 10 01:00:15.020 EAL: Detected NUMA nodes: 1 01:00:15.020 EAL: Detected shared linkage of DPDK 01:00:15.020 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:00:15.020 EAL: Selected IOVA mode 'PA' 01:00:15.020 TELEMETRY: No legacy callbacks, legacy socket not created 01:00:15.020 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 01:00:15.020 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 01:00:15.020 Starting DPDK initialization... 01:00:15.020 Starting SPDK post initialization... 01:00:15.020 SPDK NVMe probe 01:00:15.020 Attaching to 0000:00:10.0 01:00:15.020 Attaching to 0000:00:11.0 01:00:15.020 Attached to 0000:00:10.0 01:00:15.020 Attached to 0000:00:11.0 01:00:15.020 Cleaning up... 
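env_dpdk_post_init above detects the lcores and NUMA layout, selects IOVA mode 'PA', and probes the two emulated NVMe controllers (1b36:0010) at 0000:00:10.0 and 0000:00:11.0, printing an "Attaching to" and an "Attached to" line for each. A quick cross-check of those two line types (again just a log-reading sketch, with function and variable names of my choosing) flags any device that never finished attaching:

```python
import re

ATTACHING = re.compile(r"Attaching to (\S+)")
ATTACHED = re.compile(r"Attached to (\S+)")

def unattached_devices(log_lines):
    """Return PCI addresses that started attaching but never reported Attached."""
    started, finished = set(), set()
    for line in log_lines:
        m = ATTACHING.search(line)
        if m:
            started.add(m.group(1))
            continue
        m = ATTACHED.search(line)
        if m:
            finished.add(m.group(1))
    return started - finished

lines = [
    "Attaching to 0000:00:10.0",
    "Attaching to 0000:00:11.0",
    "Attached to 0000:00:10.0",
    "Attached to 0000:00:11.0",
]
assert unattached_devices(lines) == set()
```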
01:00:15.020 ************************************ 01:00:15.020 END TEST env_dpdk_post_init 01:00:15.020 ************************************ 01:00:15.020 01:00:15.020 real 0m0.181s 01:00:15.020 user 0m0.055s 01:00:15.020 sys 0m0.025s 01:00:15.020 10:57:20 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:15.020 10:57:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 01:00:15.020 10:57:20 env -- common/autotest_common.sh@1142 -- # return 0 01:00:15.020 10:57:20 env -- env/env.sh@26 -- # uname 01:00:15.020 10:57:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 01:00:15.020 10:57:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:00:15.020 10:57:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:15.020 10:57:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:15.020 10:57:20 env -- common/autotest_common.sh@10 -- # set +x 01:00:15.020 ************************************ 01:00:15.020 START TEST env_mem_callbacks 01:00:15.020 ************************************ 01:00:15.020 10:57:20 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:00:15.278 EAL: Detected CPU lcores: 10 01:00:15.278 EAL: Detected NUMA nodes: 1 01:00:15.278 EAL: Detected shared linkage of DPDK 01:00:15.278 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:00:15.278 EAL: Selected IOVA mode 'PA' 01:00:15.278 TELEMETRY: No legacy callbacks, legacy socket not created 01:00:15.278 01:00:15.278 01:00:15.278 CUnit - A unit testing framework for C - Version 2.1-3 01:00:15.278 http://cunit.sourceforge.net/ 01:00:15.278 01:00:15.278 01:00:15.278 Suite: memory 01:00:15.278 Test: test ... 
01:00:15.278 register 0x200000200000 2097152 01:00:15.278 malloc 3145728 01:00:15.278 register 0x200000400000 4194304 01:00:15.278 buf 0x200000500000 len 3145728 PASSED 01:00:15.278 malloc 64 01:00:15.278 buf 0x2000004fff40 len 64 PASSED 01:00:15.278 malloc 4194304 01:00:15.278 register 0x200000800000 6291456 01:00:15.278 buf 0x200000a00000 len 4194304 PASSED 01:00:15.278 free 0x200000500000 3145728 01:00:15.278 free 0x2000004fff40 64 01:00:15.278 unregister 0x200000400000 4194304 PASSED 01:00:15.278 free 0x200000a00000 4194304 01:00:15.278 unregister 0x200000800000 6291456 PASSED 01:00:15.278 malloc 8388608 01:00:15.278 register 0x200000400000 10485760 01:00:15.278 buf 0x200000600000 len 8388608 PASSED 01:00:15.278 free 0x200000600000 8388608 01:00:15.278 unregister 0x200000400000 10485760 PASSED 01:00:15.278 passed 01:00:15.278 01:00:15.278 Run Summary: Type Total Ran Passed Failed Inactive 01:00:15.278 suites 1 1 n/a 0 0 01:00:15.278 tests 1 1 1 0 0 01:00:15.278 asserts 15 15 15 0 n/a 01:00:15.278 01:00:15.278 Elapsed time = 0.008 seconds 01:00:15.278 01:00:15.278 real 0m0.151s 01:00:15.278 user 0m0.017s 01:00:15.278 sys 0m0.031s 01:00:15.278 10:57:20 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:15.278 ************************************ 01:00:15.278 END TEST env_mem_callbacks 01:00:15.278 ************************************ 01:00:15.278 10:57:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 01:00:15.278 10:57:20 env -- common/autotest_common.sh@1142 -- # return 0 01:00:15.278 ************************************ 01:00:15.278 END TEST env 01:00:15.278 ************************************ 01:00:15.278 01:00:15.278 real 0m2.500s 01:00:15.278 user 0m1.302s 01:00:15.278 sys 0m0.841s 01:00:15.278 10:57:20 env -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:15.278 10:57:20 env -- common/autotest_common.sh@10 -- # set +x 01:00:15.278 10:57:20 -- common/autotest_common.sh@1142 -- # return 0 01:00:15.278 10:57:20 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:00:15.278 10:57:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:15.278 10:57:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:15.278 10:57:20 -- common/autotest_common.sh@10 -- # set +x 01:00:15.278 ************************************ 01:00:15.278 START TEST rpc 01:00:15.278 ************************************ 01:00:15.278 10:57:20 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:00:15.536 * Looking for test storage... 01:00:15.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:00:15.536 10:57:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73282 01:00:15.536 10:57:20 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 01:00:15.536 10:57:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:00:15.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:15.536 10:57:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73282 01:00:15.536 10:57:20 rpc -- common/autotest_common.sh@829 -- # '[' -z 73282 ']' 01:00:15.536 10:57:20 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:15.536 10:57:20 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:15.536 10:57:20 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
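The rpc suite launches spdk_tgt and then sits in waitforlisten until /var/tmp/spdk.sock accepts connections; every rpc_cmd that follows is a JSON-RPC request over that Unix socket (the tests drive it through scripts/rpc.py). The sketch below reproduces that handshake by hand for illustration only; the retry limits are arbitrary, it assumes each reply fits in one recv(), and it is an outline rather than a replacement for the shipped client:

```python
import json
import socket
import time

SOCK_PATH = "/var/tmp/spdk.sock"

def wait_for_rpc(path=SOCK_PATH, attempts=100, delay=0.1):
    """Poll until the SPDK JSON-RPC Unix socket accepts a connection."""
    for _ in range(attempts):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return s
        except (FileNotFoundError, ConnectionRefusedError):
            s.close()
            time.sleep(delay)
    raise TimeoutError(f"{path} never came up")

def rpc_call(sock, method, params=None, req_id=1):
    """Send one JSON-RPC 2.0 request and read one response (single recv)."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        req["params"] = params
    sock.sendall(json.dumps(req).encode())
    return json.loads(sock.recv(65536).decode())

# e.g. the version query that skip_rpc later expects to *fail* without a server:
# print(rpc_call(wait_for_rpc(), "spdk_get_version"))
```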
01:00:15.536 10:57:20 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:15.536 10:57:20 rpc -- common/autotest_common.sh@10 -- # set +x 01:00:15.536 [2024-07-22 10:57:20.601738] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:00:15.536 [2024-07-22 10:57:20.602304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73282 ] 01:00:15.536 [2024-07-22 10:57:20.740546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:15.794 [2024-07-22 10:57:20.833077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 01:00:15.794 [2024-07-22 10:57:20.833298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73282' to capture a snapshot of events at runtime. 01:00:15.794 [2024-07-22 10:57:20.833428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:15.794 [2024-07-22 10:57:20.833542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:15.794 [2024-07-22 10:57:20.833558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73282 for offline analysis/debug. 01:00:15.794 [2024-07-22 10:57:20.833592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:16.728 10:57:21 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:16.728 10:57:21 rpc -- common/autotest_common.sh@862 -- # return 0 01:00:16.728 10:57:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:00:16.728 10:57:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:00:16.728 10:57:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 01:00:16.728 10:57:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 01:00:16.728 10:57:21 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:16.728 10:57:21 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:16.728 10:57:21 rpc -- common/autotest_common.sh@10 -- # set +x 01:00:16.728 ************************************ 01:00:16.728 START TEST rpc_integrity 01:00:16.728 ************************************ 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 01:00:16.728 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.728 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:00:16.728 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 01:00:16.728 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:00:16.728 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.728 10:57:21 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.728 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 01:00:16.728 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.728 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.728 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:00:16.728 { 01:00:16.728 "aliases": [ 01:00:16.729 "f6e790e5-34d9-42b5-8127-30f41c3fc8ed" 01:00:16.729 ], 01:00:16.729 "assigned_rate_limits": { 01:00:16.729 "r_mbytes_per_sec": 0, 01:00:16.729 "rw_ios_per_sec": 0, 01:00:16.729 "rw_mbytes_per_sec": 0, 01:00:16.729 "w_mbytes_per_sec": 0 01:00:16.729 }, 01:00:16.729 "block_size": 512, 01:00:16.729 "claimed": false, 01:00:16.729 "driver_specific": {}, 01:00:16.729 "memory_domains": [ 01:00:16.729 { 01:00:16.729 "dma_device_id": "system", 01:00:16.729 "dma_device_type": 1 01:00:16.729 }, 01:00:16.729 { 01:00:16.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:00:16.729 "dma_device_type": 2 01:00:16.729 } 01:00:16.729 ], 01:00:16.729 "name": "Malloc0", 01:00:16.729 "num_blocks": 16384, 01:00:16.729 "product_name": "Malloc disk", 01:00:16.729 "supported_io_types": { 01:00:16.729 "abort": true, 01:00:16.729 "compare": false, 01:00:16.729 "compare_and_write": false, 01:00:16.729 "copy": true, 01:00:16.729 "flush": true, 01:00:16.729 "get_zone_info": false, 01:00:16.729 "nvme_admin": false, 01:00:16.729 "nvme_io": false, 01:00:16.729 "nvme_io_md": false, 01:00:16.729 "nvme_iov_md": false, 01:00:16.729 "read": true, 01:00:16.729 "reset": true, 01:00:16.729 "seek_data": false, 01:00:16.729 "seek_hole": false, 01:00:16.729 "unmap": true, 01:00:16.729 "write": true, 01:00:16.729 "write_zeroes": true, 01:00:16.729 "zcopy": true, 01:00:16.729 "zone_append": false, 01:00:16.729 "zone_management": false 01:00:16.729 }, 01:00:16.729 "uuid": "f6e790e5-34d9-42b5-8127-30f41c3fc8ed", 01:00:16.729 "zoned": false 01:00:16.729 } 01:00:16.729 ]' 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.729 [2024-07-22 10:57:21.809854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 01:00:16.729 [2024-07-22 10:57:21.809929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 01:00:16.729 [2024-07-22 10:57:21.809947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x112a390 01:00:16.729 [2024-07-22 10:57:21.809956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 01:00:16.729 [2024-07-22 10:57:21.811846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:00:16.729 [2024-07-22 10:57:21.811907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:00:16.729 Passthru0 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.729 
10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:00:16.729 { 01:00:16.729 "aliases": [ 01:00:16.729 "f6e790e5-34d9-42b5-8127-30f41c3fc8ed" 01:00:16.729 ], 01:00:16.729 "assigned_rate_limits": { 01:00:16.729 "r_mbytes_per_sec": 0, 01:00:16.729 "rw_ios_per_sec": 0, 01:00:16.729 "rw_mbytes_per_sec": 0, 01:00:16.729 "w_mbytes_per_sec": 0 01:00:16.729 }, 01:00:16.729 "block_size": 512, 01:00:16.729 "claim_type": "exclusive_write", 01:00:16.729 "claimed": true, 01:00:16.729 "driver_specific": {}, 01:00:16.729 "memory_domains": [ 01:00:16.729 { 01:00:16.729 "dma_device_id": "system", 01:00:16.729 "dma_device_type": 1 01:00:16.729 }, 01:00:16.729 { 01:00:16.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:00:16.729 "dma_device_type": 2 01:00:16.729 } 01:00:16.729 ], 01:00:16.729 "name": "Malloc0", 01:00:16.729 "num_blocks": 16384, 01:00:16.729 "product_name": "Malloc disk", 01:00:16.729 "supported_io_types": { 01:00:16.729 "abort": true, 01:00:16.729 "compare": false, 01:00:16.729 "compare_and_write": false, 01:00:16.729 "copy": true, 01:00:16.729 "flush": true, 01:00:16.729 "get_zone_info": false, 01:00:16.729 "nvme_admin": false, 01:00:16.729 "nvme_io": false, 01:00:16.729 "nvme_io_md": false, 01:00:16.729 "nvme_iov_md": false, 01:00:16.729 "read": true, 01:00:16.729 "reset": true, 01:00:16.729 "seek_data": false, 01:00:16.729 "seek_hole": false, 01:00:16.729 "unmap": true, 01:00:16.729 "write": true, 01:00:16.729 "write_zeroes": true, 01:00:16.729 "zcopy": true, 01:00:16.729 "zone_append": false, 01:00:16.729 "zone_management": false 01:00:16.729 }, 01:00:16.729 "uuid": "f6e790e5-34d9-42b5-8127-30f41c3fc8ed", 01:00:16.729 "zoned": false 01:00:16.729 }, 01:00:16.729 { 01:00:16.729 "aliases": [ 01:00:16.729 "5f9b51a0-aa12-5596-bc56-f856925a8be6" 01:00:16.729 ], 01:00:16.729 "assigned_rate_limits": { 01:00:16.729 "r_mbytes_per_sec": 0, 01:00:16.729 "rw_ios_per_sec": 0, 01:00:16.729 "rw_mbytes_per_sec": 0, 01:00:16.729 "w_mbytes_per_sec": 0 01:00:16.729 }, 01:00:16.729 "block_size": 512, 01:00:16.729 "claimed": false, 01:00:16.729 "driver_specific": { 01:00:16.729 "passthru": { 01:00:16.729 "base_bdev_name": "Malloc0", 01:00:16.729 "name": "Passthru0" 01:00:16.729 } 01:00:16.729 }, 01:00:16.729 "memory_domains": [ 01:00:16.729 { 01:00:16.729 "dma_device_id": "system", 01:00:16.729 "dma_device_type": 1 01:00:16.729 }, 01:00:16.729 { 01:00:16.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:00:16.729 "dma_device_type": 2 01:00:16.729 } 01:00:16.729 ], 01:00:16.729 "name": "Passthru0", 01:00:16.729 "num_blocks": 16384, 01:00:16.729 "product_name": "passthru", 01:00:16.729 "supported_io_types": { 01:00:16.729 "abort": true, 01:00:16.729 "compare": false, 01:00:16.729 "compare_and_write": false, 01:00:16.729 "copy": true, 01:00:16.729 "flush": true, 01:00:16.729 "get_zone_info": false, 01:00:16.729 "nvme_admin": false, 01:00:16.729 "nvme_io": false, 01:00:16.729 "nvme_io_md": false, 01:00:16.729 "nvme_iov_md": false, 01:00:16.729 "read": true, 01:00:16.729 "reset": true, 01:00:16.729 "seek_data": false, 01:00:16.729 "seek_hole": false, 01:00:16.729 "unmap": true, 01:00:16.729 "write": true, 01:00:16.729 "write_zeroes": true, 01:00:16.729 
"zcopy": true, 01:00:16.729 "zone_append": false, 01:00:16.729 "zone_management": false 01:00:16.729 }, 01:00:16.729 "uuid": "5f9b51a0-aa12-5596-bc56-f856925a8be6", 01:00:16.729 "zoned": false 01:00:16.729 } 01:00:16.729 ]' 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.729 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.729 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.995 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.995 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:00:16.995 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 01:00:16.995 ************************************ 01:00:16.995 END TEST rpc_integrity 01:00:16.995 ************************************ 01:00:16.995 10:57:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:00:16.995 01:00:16.995 real 0m0.347s 01:00:16.995 user 0m0.232s 01:00:16.995 sys 0m0.035s 01:00:16.995 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:16.995 10:57:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:16.995 10:57:22 rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:16.995 10:57:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 01:00:16.995 10:57:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:16.995 10:57:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:16.995 10:57:22 rpc -- common/autotest_common.sh@10 -- # set +x 01:00:16.995 ************************************ 01:00:16.995 START TEST rpc_plugins 01:00:16.995 ************************************ 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 
01:00:16.995 { 01:00:16.995 "aliases": [ 01:00:16.995 "c14b1253-eb35-433b-b35f-f296f991f12a" 01:00:16.995 ], 01:00:16.995 "assigned_rate_limits": { 01:00:16.995 "r_mbytes_per_sec": 0, 01:00:16.995 "rw_ios_per_sec": 0, 01:00:16.995 "rw_mbytes_per_sec": 0, 01:00:16.995 "w_mbytes_per_sec": 0 01:00:16.995 }, 01:00:16.995 "block_size": 4096, 01:00:16.995 "claimed": false, 01:00:16.995 "driver_specific": {}, 01:00:16.995 "memory_domains": [ 01:00:16.995 { 01:00:16.995 "dma_device_id": "system", 01:00:16.995 "dma_device_type": 1 01:00:16.995 }, 01:00:16.995 { 01:00:16.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:00:16.995 "dma_device_type": 2 01:00:16.995 } 01:00:16.995 ], 01:00:16.995 "name": "Malloc1", 01:00:16.995 "num_blocks": 256, 01:00:16.995 "product_name": "Malloc disk", 01:00:16.995 "supported_io_types": { 01:00:16.995 "abort": true, 01:00:16.995 "compare": false, 01:00:16.995 "compare_and_write": false, 01:00:16.995 "copy": true, 01:00:16.995 "flush": true, 01:00:16.995 "get_zone_info": false, 01:00:16.995 "nvme_admin": false, 01:00:16.995 "nvme_io": false, 01:00:16.995 "nvme_io_md": false, 01:00:16.995 "nvme_iov_md": false, 01:00:16.995 "read": true, 01:00:16.995 "reset": true, 01:00:16.995 "seek_data": false, 01:00:16.995 "seek_hole": false, 01:00:16.995 "unmap": true, 01:00:16.995 "write": true, 01:00:16.995 "write_zeroes": true, 01:00:16.995 "zcopy": true, 01:00:16.995 "zone_append": false, 01:00:16.995 "zone_management": false 01:00:16.995 }, 01:00:16.995 "uuid": "c14b1253-eb35-433b-b35f-f296f991f12a", 01:00:16.995 "zoned": false 01:00:16.995 } 01:00:16.995 ]' 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:00:16.995 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 01:00:16.995 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 01:00:17.271 ************************************ 01:00:17.271 END TEST rpc_plugins 01:00:17.271 ************************************ 01:00:17.271 10:57:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 01:00:17.271 01:00:17.271 real 0m0.168s 01:00:17.271 user 0m0.113s 01:00:17.271 sys 0m0.020s 01:00:17.271 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:17.271 10:57:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:00:17.271 10:57:22 rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:17.271 10:57:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 01:00:17.271 10:57:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:17.271 10:57:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:17.271 10:57:22 rpc -- common/autotest_common.sh@10 -- # set +x 01:00:17.271 ************************************ 01:00:17.271 START TEST 
rpc_trace_cmd_test 01:00:17.271 ************************************ 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 01:00:17.271 "bdev": { 01:00:17.271 "mask": "0x8", 01:00:17.271 "tpoint_mask": "0xffffffffffffffff" 01:00:17.271 }, 01:00:17.271 "bdev_nvme": { 01:00:17.271 "mask": "0x4000", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "blobfs": { 01:00:17.271 "mask": "0x80", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "dsa": { 01:00:17.271 "mask": "0x200", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "ftl": { 01:00:17.271 "mask": "0x40", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "iaa": { 01:00:17.271 "mask": "0x1000", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "iscsi_conn": { 01:00:17.271 "mask": "0x2", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "nvme_pcie": { 01:00:17.271 "mask": "0x800", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "nvme_tcp": { 01:00:17.271 "mask": "0x2000", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "nvmf_rdma": { 01:00:17.271 "mask": "0x10", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "nvmf_tcp": { 01:00:17.271 "mask": "0x20", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "scsi": { 01:00:17.271 "mask": "0x4", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "sock": { 01:00:17.271 "mask": "0x8000", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "thread": { 01:00:17.271 "mask": "0x400", 01:00:17.271 "tpoint_mask": "0x0" 01:00:17.271 }, 01:00:17.271 "tpoint_group_mask": "0x8", 01:00:17.271 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73282" 01:00:17.271 }' 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 01:00:17.271 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 01:00:17.529 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 01:00:17.529 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 01:00:17.529 10:57:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 01:00:17.529 01:00:17.529 real 0m0.274s 01:00:17.529 user 0m0.237s 01:00:17.529 sys 0m0.026s 01:00:17.529 10:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:17.529 10:57:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:00:17.529 ************************************ 01:00:17.529 END TEST 
rpc_trace_cmd_test 01:00:17.529 ************************************ 01:00:17.529 10:57:22 rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:17.529 10:57:22 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 01:00:17.529 10:57:22 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 01:00:17.529 10:57:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:17.529 10:57:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:17.529 10:57:22 rpc -- common/autotest_common.sh@10 -- # set +x 01:00:17.529 ************************************ 01:00:17.529 START TEST go_rpc 01:00:17.529 ************************************ 01:00:17.529 10:57:22 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 01:00:17.529 10:57:22 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:17.529 10:57:22 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:17.529 10:57:22 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["9e07fe32-e13f-4477-96b7-6d896bed6412"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"9e07fe32-e13f-4477-96b7-6d896bed6412","zoned":false}]' 01:00:17.529 10:57:22 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 01:00:17.788 10:57:22 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 01:00:17.788 10:57:22 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 01:00:17.788 10:57:22 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:17.788 10:57:22 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:17.788 10:57:22 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:17.788 10:57:22 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 01:00:17.788 10:57:22 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 01:00:17.788 10:57:22 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 01:00:17.788 10:57:22 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 01:00:17.788 01:00:17.788 real 0m0.243s 01:00:17.788 user 0m0.170s 01:00:17.788 sys 0m0.038s 01:00:17.788 10:57:22 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:17.788 10:57:22 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:17.788 ************************************ 01:00:17.788 END TEST 
go_rpc 01:00:17.788 ************************************ 01:00:17.788 10:57:22 rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:17.788 10:57:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 01:00:17.788 10:57:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 01:00:17.788 10:57:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:17.788 10:57:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:17.788 10:57:22 rpc -- common/autotest_common.sh@10 -- # set +x 01:00:17.788 ************************************ 01:00:17.788 START TEST rpc_daemon_integrity 01:00:17.788 ************************************ 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:00:17.788 { 01:00:17.788 "aliases": [ 01:00:17.788 "05a32676-09e5-4e2a-81d0-441b4035286a" 01:00:17.788 ], 01:00:17.788 "assigned_rate_limits": { 01:00:17.788 "r_mbytes_per_sec": 0, 01:00:17.788 "rw_ios_per_sec": 0, 01:00:17.788 "rw_mbytes_per_sec": 0, 01:00:17.788 "w_mbytes_per_sec": 0 01:00:17.788 }, 01:00:17.788 "block_size": 512, 01:00:17.788 "claimed": false, 01:00:17.788 "driver_specific": {}, 01:00:17.788 "memory_domains": [ 01:00:17.788 { 01:00:17.788 "dma_device_id": "system", 01:00:17.788 "dma_device_type": 1 01:00:17.788 }, 01:00:17.788 { 01:00:17.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:00:17.788 "dma_device_type": 2 01:00:17.788 } 01:00:17.788 ], 01:00:17.788 "name": "Malloc3", 01:00:17.788 "num_blocks": 16384, 01:00:17.788 "product_name": "Malloc disk", 01:00:17.788 "supported_io_types": { 01:00:17.788 "abort": true, 01:00:17.788 "compare": false, 01:00:17.788 "compare_and_write": false, 01:00:17.788 "copy": true, 01:00:17.788 "flush": true, 01:00:17.788 "get_zone_info": false, 01:00:17.788 "nvme_admin": false, 01:00:17.788 "nvme_io": false, 01:00:17.788 "nvme_io_md": false, 01:00:17.788 "nvme_iov_md": false, 01:00:17.788 "read": true, 01:00:17.788 "reset": true, 01:00:17.788 "seek_data": 
false, 01:00:17.788 "seek_hole": false, 01:00:17.788 "unmap": true, 01:00:17.788 "write": true, 01:00:17.788 "write_zeroes": true, 01:00:17.788 "zcopy": true, 01:00:17.788 "zone_append": false, 01:00:17.788 "zone_management": false 01:00:17.788 }, 01:00:17.788 "uuid": "05a32676-09e5-4e2a-81d0-441b4035286a", 01:00:17.788 "zoned": false 01:00:17.788 } 01:00:17.788 ]' 01:00:17.788 10:57:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:18.046 [2024-07-22 10:57:23.043831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 01:00:18.046 [2024-07-22 10:57:23.043883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 01:00:18.046 [2024-07-22 10:57:23.043901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12dbb50 01:00:18.046 [2024-07-22 10:57:23.043910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 01:00:18.046 [2024-07-22 10:57:23.045199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:00:18.046 [2024-07-22 10:57:23.045232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:00:18.046 Passthru0 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:18.046 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:00:18.046 { 01:00:18.046 "aliases": [ 01:00:18.046 "05a32676-09e5-4e2a-81d0-441b4035286a" 01:00:18.046 ], 01:00:18.046 "assigned_rate_limits": { 01:00:18.046 "r_mbytes_per_sec": 0, 01:00:18.046 "rw_ios_per_sec": 0, 01:00:18.046 "rw_mbytes_per_sec": 0, 01:00:18.046 "w_mbytes_per_sec": 0 01:00:18.046 }, 01:00:18.046 "block_size": 512, 01:00:18.046 "claim_type": "exclusive_write", 01:00:18.046 "claimed": true, 01:00:18.046 "driver_specific": {}, 01:00:18.046 "memory_domains": [ 01:00:18.046 { 01:00:18.046 "dma_device_id": "system", 01:00:18.046 "dma_device_type": 1 01:00:18.046 }, 01:00:18.046 { 01:00:18.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:00:18.046 "dma_device_type": 2 01:00:18.046 } 01:00:18.046 ], 01:00:18.046 "name": "Malloc3", 01:00:18.046 "num_blocks": 16384, 01:00:18.046 "product_name": "Malloc disk", 01:00:18.046 "supported_io_types": { 01:00:18.046 "abort": true, 01:00:18.046 "compare": false, 01:00:18.046 "compare_and_write": false, 01:00:18.046 "copy": true, 01:00:18.046 "flush": true, 01:00:18.046 "get_zone_info": false, 01:00:18.046 "nvme_admin": false, 01:00:18.046 "nvme_io": false, 01:00:18.046 "nvme_io_md": false, 01:00:18.046 "nvme_iov_md": false, 01:00:18.046 "read": true, 01:00:18.046 "reset": true, 01:00:18.046 "seek_data": false, 01:00:18.046 "seek_hole": false, 01:00:18.046 "unmap": true, 01:00:18.046 "write": true, 01:00:18.046 "write_zeroes": 
true, 01:00:18.046 "zcopy": true, 01:00:18.046 "zone_append": false, 01:00:18.046 "zone_management": false 01:00:18.046 }, 01:00:18.046 "uuid": "05a32676-09e5-4e2a-81d0-441b4035286a", 01:00:18.046 "zoned": false 01:00:18.046 }, 01:00:18.046 { 01:00:18.046 "aliases": [ 01:00:18.046 "b115fa9f-a573-574d-9573-693888c9f91d" 01:00:18.046 ], 01:00:18.046 "assigned_rate_limits": { 01:00:18.046 "r_mbytes_per_sec": 0, 01:00:18.046 "rw_ios_per_sec": 0, 01:00:18.046 "rw_mbytes_per_sec": 0, 01:00:18.046 "w_mbytes_per_sec": 0 01:00:18.046 }, 01:00:18.046 "block_size": 512, 01:00:18.046 "claimed": false, 01:00:18.046 "driver_specific": { 01:00:18.046 "passthru": { 01:00:18.046 "base_bdev_name": "Malloc3", 01:00:18.046 "name": "Passthru0" 01:00:18.046 } 01:00:18.046 }, 01:00:18.046 "memory_domains": [ 01:00:18.046 { 01:00:18.046 "dma_device_id": "system", 01:00:18.046 "dma_device_type": 1 01:00:18.046 }, 01:00:18.046 { 01:00:18.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:00:18.047 "dma_device_type": 2 01:00:18.047 } 01:00:18.047 ], 01:00:18.047 "name": "Passthru0", 01:00:18.047 "num_blocks": 16384, 01:00:18.047 "product_name": "passthru", 01:00:18.047 "supported_io_types": { 01:00:18.047 "abort": true, 01:00:18.047 "compare": false, 01:00:18.047 "compare_and_write": false, 01:00:18.047 "copy": true, 01:00:18.047 "flush": true, 01:00:18.047 "get_zone_info": false, 01:00:18.047 "nvme_admin": false, 01:00:18.047 "nvme_io": false, 01:00:18.047 "nvme_io_md": false, 01:00:18.047 "nvme_iov_md": false, 01:00:18.047 "read": true, 01:00:18.047 "reset": true, 01:00:18.047 "seek_data": false, 01:00:18.047 "seek_hole": false, 01:00:18.047 "unmap": true, 01:00:18.047 "write": true, 01:00:18.047 "write_zeroes": true, 01:00:18.047 "zcopy": true, 01:00:18.047 "zone_append": false, 01:00:18.047 "zone_management": false 01:00:18.047 }, 01:00:18.047 "uuid": "b115fa9f-a573-574d-9573-693888c9f91d", 01:00:18.047 "zoned": false 01:00:18.047 } 01:00:18.047 ]' 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:00:18.047 
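rpc_integrity and rpc_daemon_integrity above both follow the same shape: create a malloc bdev, layer Passthru0 on it with bdev_passthru_create -b ... -p Passthru0, confirm via jq that bdev_get_bdevs now reports two bdevs with the base claimed as exclusive_write, then tear down in reverse order. The same claim check can be written directly against the JSON fields visible in the dump above; the function name and the trimmed sample response below are mine, not part of the test:

```python
def passthru_claim_ok(bdevs, base="Malloc3", passthru="Passthru0"):
    """True when <base> is claimed exclusively and <passthru> layers on top of it."""
    by_name = {b["name"]: b for b in bdevs}
    if base not in by_name or passthru not in by_name:
        return False
    layered = by_name[passthru]["driver_specific"].get("passthru", {})
    return (by_name[base]["claimed"]
            and by_name[base].get("claim_type") == "exclusive_write"
            and layered.get("base_bdev_name") == base)

# Trimmed stand-in for the bdev_get_bdevs output shown above.
sample = [
    {"name": "Malloc3", "claimed": True, "claim_type": "exclusive_write",
     "driver_specific": {}},
    {"name": "Passthru0", "claimed": False,
     "driver_specific": {"passthru": {"base_bdev_name": "Malloc3",
                                      "name": "Passthru0"}}},
]
assert passthru_claim_ok(sample)
```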
01:00:18.047 real 0m0.339s 01:00:18.047 user 0m0.235s 01:00:18.047 sys 0m0.031s 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:18.047 10:57:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:00:18.047 ************************************ 01:00:18.047 END TEST rpc_daemon_integrity 01:00:18.047 ************************************ 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:18.305 10:57:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:00:18.305 10:57:23 rpc -- rpc/rpc.sh@84 -- # killprocess 73282 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@948 -- # '[' -z 73282 ']' 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@952 -- # kill -0 73282 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@953 -- # uname 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73282 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:18.305 killing process with pid 73282 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73282' 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@967 -- # kill 73282 01:00:18.305 10:57:23 rpc -- common/autotest_common.sh@972 -- # wait 73282 01:00:18.563 01:00:18.563 real 0m3.297s 01:00:18.563 user 0m4.389s 01:00:18.563 sys 0m0.774s 01:00:18.563 10:57:23 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:18.563 10:57:23 rpc -- common/autotest_common.sh@10 -- # set +x 01:00:18.563 ************************************ 01:00:18.563 END TEST rpc 01:00:18.563 ************************************ 01:00:18.821 10:57:23 -- common/autotest_common.sh@1142 -- # return 0 01:00:18.821 10:57:23 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:00:18.821 10:57:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:18.821 10:57:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:18.821 10:57:23 -- common/autotest_common.sh@10 -- # set +x 01:00:18.821 ************************************ 01:00:18.821 START TEST skip_rpc 01:00:18.821 ************************************ 01:00:18.821 10:57:23 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:00:18.821 * Looking for test storage... 
01:00:18.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:00:18.821 10:57:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:00:18.821 10:57:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:00:18.821 10:57:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 01:00:18.821 10:57:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:18.821 10:57:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:18.821 10:57:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:18.821 ************************************ 01:00:18.821 START TEST skip_rpc 01:00:18.821 ************************************ 01:00:18.821 10:57:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 01:00:18.821 10:57:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=73543 01:00:18.821 10:57:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:00:18.821 10:57:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 01:00:18.821 10:57:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 01:00:18.821 [2024-07-22 10:57:23.980889] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:00:18.821 [2024-07-22 10:57:23.981041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73543 ] 01:00:19.080 [2024-07-22 10:57:24.126002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:19.080 [2024-07-22 10:57:24.225257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:24.346 2024/07/22 10:57:28 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:24.346 10:57:28 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 73543 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 73543 ']' 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 73543 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73543 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:24.346 killing process with pid 73543 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73543' 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 73543 01:00:24.346 10:57:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 73543 01:00:24.346 01:00:24.346 real 0m5.460s 01:00:24.346 user 0m5.037s 01:00:24.346 sys 0m0.324s 01:00:24.346 10:57:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:24.346 10:57:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:24.346 ************************************ 01:00:24.346 END TEST skip_rpc 01:00:24.346 ************************************ 01:00:24.346 10:57:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:24.346 10:57:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 01:00:24.346 10:57:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:24.346 10:57:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:24.346 10:57:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:24.346 ************************************ 01:00:24.346 START TEST skip_rpc_with_json 01:00:24.346 ************************************ 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73630 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 73630 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 73630 ']' 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:24.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
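The skip_rpc_with_json run that follows first asks for the TCP transport with nvmf_get_transports --trtype tcp, receives JSON-RPC error -19 because no transport exists yet, creates one with nvmf_create_transport -t tcp, and then saves the full target configuration with save_config. Reusing rpc_call() and wait_for_rpc() from the earlier hand-rolled client sketch, the same exchange looks roughly like this (illustrative only; the error handling is deliberately minimal):

```python
# Depends on rpc_call()/wait_for_rpc() from the earlier client sketch.
sock = wait_for_rpc()

resp = rpc_call(sock, "nvmf_get_transports", {"trtype": "tcp"})
if "error" in resp:
    # Mirrors the trace below: Code=-19 (No such device) until a transport exists.
    rpc_call(sock, "nvmf_create_transport", {"trtype": "tcp"})
    resp = rpc_call(sock, "nvmf_get_transports", {"trtype": "tcp"})

print(resp["result"])  # now includes the freshly created TCP transport
```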
01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:24.346 10:57:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:00:24.346 [2024-07-22 10:57:29.472478] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:00:24.346 [2024-07-22 10:57:29.472594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73630 ] 01:00:24.605 [2024-07-22 10:57:29.608904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:24.605 [2024-07-22 10:57:29.703081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:00:25.541 [2024-07-22 10:57:30.492870] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 01:00:25.541 2024/07/22 10:57:30 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 01:00:25.541 request: 01:00:25.541 { 01:00:25.541 "method": "nvmf_get_transports", 01:00:25.541 "params": { 01:00:25.541 "trtype": "tcp" 01:00:25.541 } 01:00:25.541 } 01:00:25.541 Got JSON-RPC error response 01:00:25.541 GoRPCClient: error on JSON-RPC call 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:00:25.541 [2024-07-22 10:57:30.504936] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:25.541 10:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:00:25.541 { 01:00:25.541 "subsystems": [ 01:00:25.541 { 01:00:25.541 "subsystem": "keyring", 01:00:25.541 "config": [] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "iobuf", 01:00:25.541 "config": [ 01:00:25.541 { 01:00:25.541 "method": "iobuf_set_options", 01:00:25.541 "params": { 01:00:25.541 "large_bufsize": 135168, 01:00:25.541 "large_pool_count": 1024, 01:00:25.541 "small_bufsize": 8192, 01:00:25.541 "small_pool_count": 8192 01:00:25.541 } 01:00:25.541 } 
01:00:25.541 ] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "sock", 01:00:25.541 "config": [ 01:00:25.541 { 01:00:25.541 "method": "sock_set_default_impl", 01:00:25.541 "params": { 01:00:25.541 "impl_name": "posix" 01:00:25.541 } 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "method": "sock_impl_set_options", 01:00:25.541 "params": { 01:00:25.541 "enable_ktls": false, 01:00:25.541 "enable_placement_id": 0, 01:00:25.541 "enable_quickack": false, 01:00:25.541 "enable_recv_pipe": true, 01:00:25.541 "enable_zerocopy_send_client": false, 01:00:25.541 "enable_zerocopy_send_server": true, 01:00:25.541 "impl_name": "ssl", 01:00:25.541 "recv_buf_size": 4096, 01:00:25.541 "send_buf_size": 4096, 01:00:25.541 "tls_version": 0, 01:00:25.541 "zerocopy_threshold": 0 01:00:25.541 } 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "method": "sock_impl_set_options", 01:00:25.541 "params": { 01:00:25.541 "enable_ktls": false, 01:00:25.541 "enable_placement_id": 0, 01:00:25.541 "enable_quickack": false, 01:00:25.541 "enable_recv_pipe": true, 01:00:25.541 "enable_zerocopy_send_client": false, 01:00:25.541 "enable_zerocopy_send_server": true, 01:00:25.541 "impl_name": "posix", 01:00:25.541 "recv_buf_size": 2097152, 01:00:25.541 "send_buf_size": 2097152, 01:00:25.541 "tls_version": 0, 01:00:25.541 "zerocopy_threshold": 0 01:00:25.541 } 01:00:25.541 } 01:00:25.541 ] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "vmd", 01:00:25.541 "config": [] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "accel", 01:00:25.541 "config": [ 01:00:25.541 { 01:00:25.541 "method": "accel_set_options", 01:00:25.541 "params": { 01:00:25.541 "buf_count": 2048, 01:00:25.541 "large_cache_size": 16, 01:00:25.541 "sequence_count": 2048, 01:00:25.541 "small_cache_size": 128, 01:00:25.541 "task_count": 2048 01:00:25.541 } 01:00:25.541 } 01:00:25.541 ] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "bdev", 01:00:25.541 "config": [ 01:00:25.541 { 01:00:25.541 "method": "bdev_set_options", 01:00:25.541 "params": { 01:00:25.541 "bdev_auto_examine": true, 01:00:25.541 "bdev_io_cache_size": 256, 01:00:25.541 "bdev_io_pool_size": 65535, 01:00:25.541 "iobuf_large_cache_size": 16, 01:00:25.541 "iobuf_small_cache_size": 128 01:00:25.541 } 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "method": "bdev_raid_set_options", 01:00:25.541 "params": { 01:00:25.541 "process_max_bandwidth_mb_sec": 0, 01:00:25.541 "process_window_size_kb": 1024 01:00:25.541 } 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "method": "bdev_iscsi_set_options", 01:00:25.541 "params": { 01:00:25.541 "timeout_sec": 30 01:00:25.541 } 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "method": "bdev_nvme_set_options", 01:00:25.541 "params": { 01:00:25.541 "action_on_timeout": "none", 01:00:25.541 "allow_accel_sequence": false, 01:00:25.541 "arbitration_burst": 0, 01:00:25.541 "bdev_retry_count": 3, 01:00:25.541 "ctrlr_loss_timeout_sec": 0, 01:00:25.541 "delay_cmd_submit": true, 01:00:25.541 "dhchap_dhgroups": [ 01:00:25.541 "null", 01:00:25.541 "ffdhe2048", 01:00:25.541 "ffdhe3072", 01:00:25.541 "ffdhe4096", 01:00:25.541 "ffdhe6144", 01:00:25.541 "ffdhe8192" 01:00:25.541 ], 01:00:25.541 "dhchap_digests": [ 01:00:25.541 "sha256", 01:00:25.541 "sha384", 01:00:25.541 "sha512" 01:00:25.541 ], 01:00:25.541 "disable_auto_failback": false, 01:00:25.541 "fast_io_fail_timeout_sec": 0, 01:00:25.541 "generate_uuids": false, 01:00:25.541 "high_priority_weight": 0, 01:00:25.541 "io_path_stat": false, 01:00:25.541 "io_queue_requests": 0, 01:00:25.541 "keep_alive_timeout_ms": 
10000, 01:00:25.541 "low_priority_weight": 0, 01:00:25.541 "medium_priority_weight": 0, 01:00:25.541 "nvme_adminq_poll_period_us": 10000, 01:00:25.541 "nvme_error_stat": false, 01:00:25.541 "nvme_ioq_poll_period_us": 0, 01:00:25.541 "rdma_cm_event_timeout_ms": 0, 01:00:25.541 "rdma_max_cq_size": 0, 01:00:25.541 "rdma_srq_size": 0, 01:00:25.541 "reconnect_delay_sec": 0, 01:00:25.541 "timeout_admin_us": 0, 01:00:25.541 "timeout_us": 0, 01:00:25.541 "transport_ack_timeout": 0, 01:00:25.541 "transport_retry_count": 4, 01:00:25.541 "transport_tos": 0 01:00:25.541 } 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "method": "bdev_nvme_set_hotplug", 01:00:25.541 "params": { 01:00:25.541 "enable": false, 01:00:25.541 "period_us": 100000 01:00:25.541 } 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "method": "bdev_wait_for_examine" 01:00:25.541 } 01:00:25.541 ] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "scsi", 01:00:25.541 "config": null 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "scheduler", 01:00:25.541 "config": [ 01:00:25.541 { 01:00:25.541 "method": "framework_set_scheduler", 01:00:25.541 "params": { 01:00:25.541 "name": "static" 01:00:25.541 } 01:00:25.541 } 01:00:25.541 ] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "vhost_scsi", 01:00:25.541 "config": [] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "vhost_blk", 01:00:25.541 "config": [] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "ublk", 01:00:25.541 "config": [] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "nbd", 01:00:25.541 "config": [] 01:00:25.541 }, 01:00:25.541 { 01:00:25.541 "subsystem": "nvmf", 01:00:25.542 "config": [ 01:00:25.542 { 01:00:25.542 "method": "nvmf_set_config", 01:00:25.542 "params": { 01:00:25.542 "admin_cmd_passthru": { 01:00:25.542 "identify_ctrlr": false 01:00:25.542 }, 01:00:25.542 "discovery_filter": "match_any" 01:00:25.542 } 01:00:25.542 }, 01:00:25.542 { 01:00:25.542 "method": "nvmf_set_max_subsystems", 01:00:25.542 "params": { 01:00:25.542 "max_subsystems": 1024 01:00:25.542 } 01:00:25.542 }, 01:00:25.542 { 01:00:25.542 "method": "nvmf_set_crdt", 01:00:25.542 "params": { 01:00:25.542 "crdt1": 0, 01:00:25.542 "crdt2": 0, 01:00:25.542 "crdt3": 0 01:00:25.542 } 01:00:25.542 }, 01:00:25.542 { 01:00:25.542 "method": "nvmf_create_transport", 01:00:25.542 "params": { 01:00:25.542 "abort_timeout_sec": 1, 01:00:25.542 "ack_timeout": 0, 01:00:25.542 "buf_cache_size": 4294967295, 01:00:25.542 "c2h_success": true, 01:00:25.542 "data_wr_pool_size": 0, 01:00:25.542 "dif_insert_or_strip": false, 01:00:25.542 "in_capsule_data_size": 4096, 01:00:25.542 "io_unit_size": 131072, 01:00:25.542 "max_aq_depth": 128, 01:00:25.542 "max_io_qpairs_per_ctrlr": 127, 01:00:25.542 "max_io_size": 131072, 01:00:25.542 "max_queue_depth": 128, 01:00:25.542 "num_shared_buffers": 511, 01:00:25.542 "sock_priority": 0, 01:00:25.542 "trtype": "TCP", 01:00:25.542 "zcopy": false 01:00:25.542 } 01:00:25.542 } 01:00:25.542 ] 01:00:25.542 }, 01:00:25.542 { 01:00:25.542 "subsystem": "iscsi", 01:00:25.542 "config": [ 01:00:25.542 { 01:00:25.542 "method": "iscsi_set_options", 01:00:25.542 "params": { 01:00:25.542 "allow_duplicated_isid": false, 01:00:25.542 "chap_group": 0, 01:00:25.542 "data_out_pool_size": 2048, 01:00:25.542 "default_time2retain": 20, 01:00:25.542 "default_time2wait": 2, 01:00:25.542 "disable_chap": false, 01:00:25.542 "error_recovery_level": 0, 01:00:25.542 "first_burst_length": 8192, 01:00:25.542 "immediate_data": true, 01:00:25.542 "immediate_data_pool_size": 16384, 
01:00:25.542 "max_connections_per_session": 2, 01:00:25.542 "max_large_datain_per_connection": 64, 01:00:25.542 "max_queue_depth": 64, 01:00:25.542 "max_r2t_per_connection": 4, 01:00:25.542 "max_sessions": 128, 01:00:25.542 "mutual_chap": false, 01:00:25.542 "node_base": "iqn.2016-06.io.spdk", 01:00:25.542 "nop_in_interval": 30, 01:00:25.542 "nop_timeout": 60, 01:00:25.542 "pdu_pool_size": 36864, 01:00:25.542 "require_chap": false 01:00:25.542 } 01:00:25.542 } 01:00:25.542 ] 01:00:25.542 } 01:00:25.542 ] 01:00:25.542 } 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 73630 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 73630 ']' 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 73630 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73630 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:25.542 killing process with pid 73630 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73630' 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 73630 01:00:25.542 10:57:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 73630 01:00:26.109 10:57:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:00:26.109 10:57:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73675 01:00:26.109 10:57:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 73675 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 73675 ']' 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 73675 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73675 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:31.373 killing process with pid 73675 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73675' 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 73675 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 73675 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:00:31.373 01:00:31.373 real 0m7.105s 01:00:31.373 user 0m6.842s 01:00:31.373 sys 0m0.685s 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:00:31.373 ************************************ 01:00:31.373 END TEST skip_rpc_with_json 01:00:31.373 ************************************ 01:00:31.373 10:57:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:31.373 10:57:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 01:00:31.373 10:57:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:31.373 10:57:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:31.373 10:57:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:31.373 ************************************ 01:00:31.373 START TEST skip_rpc_with_delay 01:00:31.373 ************************************ 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:00:31.373 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:00:31.630 [2024-07-22 10:57:36.637722] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
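The *ERROR* line above is the point of the skip_rpc_with_delay test: the target was launched with --no-rpc-server, so there is no RPC listener for --wait-for-rpc to block on, and spdk_tgt is expected to refuse to start rather than hang. A minimal reproduction sketch, assuming only the same built SPDK tree at /home/vagrant/spdk_repo/spdk that this run uses:

  # Combining --no-rpc-server with --wait-for-rpc should fail fast with the
  # "Cannot use '--wait-for-rpc' if no RPC server is going to be started." error.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "spdk_tgt exit status: $?"   # the test only cares that this is non-zero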
01:00:31.630 [2024-07-22 10:57:36.637870] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 01:00:31.630 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 01:00:31.630 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:31.630 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:00:31.630 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:31.630 01:00:31.630 real 0m0.094s 01:00:31.630 user 0m0.065s 01:00:31.630 sys 0m0.029s 01:00:31.630 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:31.630 10:57:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 01:00:31.630 ************************************ 01:00:31.630 END TEST skip_rpc_with_delay 01:00:31.630 ************************************ 01:00:31.630 10:57:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:31.630 10:57:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 01:00:31.630 10:57:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 01:00:31.630 10:57:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 01:00:31.630 10:57:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:31.630 10:57:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:31.630 10:57:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:31.630 ************************************ 01:00:31.630 START TEST exit_on_failed_rpc_init 01:00:31.630 ************************************ 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73779 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 73779 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 73779 ']' 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:31.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:31.630 10:57:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:00:31.630 [2024-07-22 10:57:36.785949] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:00:31.630 [2024-07-22 10:57:36.786074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73779 ] 01:00:31.887 [2024-07-22 10:57:36.928252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:31.887 [2024-07-22 10:57:37.027027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:32.817 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:32.817 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 01:00:32.817 10:57:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:00:32.817 10:57:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:00:32.818 10:57:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:00:32.818 [2024-07-22 10:57:37.891836] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:00:32.818 [2024-07-22 10:57:37.891936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73809 ] 01:00:33.075 [2024-07-22 10:57:38.030598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:33.075 [2024-07-22 10:57:38.127556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:33.075 [2024-07-22 10:57:38.127649] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
01:00:33.075 [2024-07-22 10:57:38.127699] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 01:00:33.075 [2024-07-22 10:57:38.127707] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 73779 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 73779 ']' 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 73779 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73779 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:33.075 killing process with pid 73779 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73779' 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 73779 01:00:33.075 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 73779 01:00:33.640 01:00:33.640 real 0m1.912s 01:00:33.640 user 0m2.277s 01:00:33.640 sys 0m0.436s 01:00:33.640 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:33.640 10:57:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:00:33.640 ************************************ 01:00:33.640 END TEST exit_on_failed_rpc_init 01:00:33.640 ************************************ 01:00:33.640 10:57:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 01:00:33.640 10:57:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:00:33.640 01:00:33.640 real 0m14.877s 01:00:33.640 user 0m14.329s 01:00:33.640 sys 0m1.657s 01:00:33.640 10:57:38 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:33.640 10:57:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:33.640 ************************************ 01:00:33.640 END TEST skip_rpc 01:00:33.640 ************************************ 01:00:33.640 10:57:38 -- common/autotest_common.sh@1142 -- # return 0 01:00:33.640 10:57:38 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:00:33.640 10:57:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:33.640 
10:57:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:33.640 10:57:38 -- common/autotest_common.sh@10 -- # set +x 01:00:33.640 ************************************ 01:00:33.640 START TEST rpc_client 01:00:33.640 ************************************ 01:00:33.640 10:57:38 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:00:33.640 * Looking for test storage... 01:00:33.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 01:00:33.640 10:57:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 01:00:33.640 OK 01:00:33.640 10:57:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 01:00:33.641 01:00:33.641 real 0m0.101s 01:00:33.641 user 0m0.047s 01:00:33.641 sys 0m0.062s 01:00:33.641 10:57:38 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:33.641 10:57:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 01:00:33.641 ************************************ 01:00:33.641 END TEST rpc_client 01:00:33.641 ************************************ 01:00:33.899 10:57:38 -- common/autotest_common.sh@1142 -- # return 0 01:00:33.899 10:57:38 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:00:33.899 10:57:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:33.899 10:57:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:33.899 10:57:38 -- common/autotest_common.sh@10 -- # set +x 01:00:33.899 ************************************ 01:00:33.899 START TEST json_config 01:00:33.899 ************************************ 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@7 -- # uname -s 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:33.899 10:57:38 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:33.899 10:57:38 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:33.899 10:57:38 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:33.899 10:57:38 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:33.899 10:57:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:33.899 10:57:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:33.899 10:57:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:33.899 10:57:38 json_config -- paths/export.sh@5 -- # export PATH 01:00:33.899 10:57:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@47 -- # : 0 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:00:33.899 10:57:38 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:00:33.899 INFO: JSON configuration test init 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:33.899 10:57:38 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 01:00:33.899 10:57:38 json_config -- json_config/common.sh@9 -- # local app=target 01:00:33.899 10:57:38 json_config -- json_config/common.sh@10 -- # shift 01:00:33.899 10:57:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:00:33.899 10:57:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 01:00:33.899 10:57:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 01:00:33.899 10:57:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:00:33.899 10:57:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:00:33.899 10:57:38 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 01:00:33.899 10:57:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73937 01:00:33.899 Waiting for target to run... 01:00:33.899 10:57:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:00:33.899 10:57:38 json_config -- json_config/common.sh@25 -- # waitforlisten 73937 /var/tmp/spdk_tgt.sock 01:00:33.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
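While the test waits here, the target has been started with -r /var/tmp/spdk_tgt.sock and --wait-for-rpc, i.e. paused on a private RPC socket so that all configuration arrives over JSON-RPC instead of a startup config file. Every tgt_rpc call in the trace below is scripts/rpc.py aimed at that socket; a hedged sketch of the pattern (the pipe from gen_nvme.sh into load_config is inferred from the back-to-back commands in the trace, not quoted from the script):

  # tgt_rpc is rpc.py plus the target's private socket; load_config replays a
  # JSON configuration document into the paused target.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | $RPC load_config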
01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@829 -- # '[' -z 73937 ']' 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:33.899 10:57:38 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:33.899 [2024-07-22 10:57:39.036447] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:00:33.899 [2024-07-22 10:57:39.037304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73937 ] 01:00:34.464 [2024-07-22 10:57:39.474864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:34.464 [2024-07-22 10:57:39.551233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:35.029 10:57:40 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:35.029 10:57:40 json_config -- common/autotest_common.sh@862 -- # return 0 01:00:35.029 01:00:35.029 10:57:40 json_config -- json_config/common.sh@26 -- # echo '' 01:00:35.029 10:57:40 json_config -- json_config/json_config.sh@273 -- # create_accel_config 01:00:35.029 10:57:40 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 01:00:35.029 10:57:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:35.029 10:57:40 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:35.029 10:57:40 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 01:00:35.029 10:57:40 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 01:00:35.029 10:57:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:35.029 10:57:40 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:35.029 10:57:40 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 01:00:35.029 10:57:40 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 01:00:35.029 10:57:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 01:00:35.593 10:57:40 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 01:00:35.593 10:57:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 01:00:35.593 10:57:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:35.593 10:57:40 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:35.593 10:57:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 01:00:35.593 10:57:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 01:00:35.593 10:57:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 01:00:35.593 10:57:40 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 01:00:35.593 10:57:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
notify_get_types 01:00:35.593 10:57:40 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@48 -- # local get_types 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@50 -- # local type_diff 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@51 -- # sort 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@51 -- # uniq -u 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@51 -- # type_diff= 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 01:00:35.851 10:57:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:35.851 10:57:40 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@59 -- # return 0 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 01:00:35.851 10:57:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:35.851 10:57:40 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 01:00:35.851 10:57:40 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 01:00:35.851 10:57:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 01:00:36.108 MallocForNvmf0 01:00:36.108 10:57:41 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 01:00:36.108 10:57:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 01:00:36.366 MallocForNvmf1 01:00:36.366 10:57:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 01:00:36.366 10:57:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 01:00:36.624 [2024-07-22 10:57:41.705047] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:36.624 10:57:41 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 01:00:36.624 10:57:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:00:36.882 10:57:42 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 01:00:36.882 10:57:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 01:00:37.140 10:57:42 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 01:00:37.140 10:57:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 01:00:37.398 10:57:42 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 01:00:37.398 10:57:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 01:00:37.654 [2024-07-22 10:57:42.825856] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:00:37.654 10:57:42 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 01:00:37.654 10:57:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:37.654 10:57:42 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:37.911 10:57:42 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 01:00:37.911 10:57:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:37.911 10:57:42 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:37.911 10:57:42 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 01:00:37.911 10:57:42 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 01:00:37.911 10:57:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 01:00:38.170 MallocBdevForConfigChangeCheck 01:00:38.170 10:57:43 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 01:00:38.170 10:57:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:38.170 10:57:43 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:38.170 10:57:43 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 01:00:38.170 10:57:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:00:38.735 INFO: shutting down applications... 01:00:38.736 10:57:43 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
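The NVMe-oF setup captured by save_config just before the shutdown announced above reduces to a short rpc.py sequence. The commands and parameters are copied from the trace; only the redirect into spdk_tgt_config.json is illustrative shorthand for the file the relaunch further down boots from:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Malloc bdevs that back the two namespaces, plus the TCP transport.
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  # One subsystem, two namespaces, one TCP listener on 127.0.0.1:4420.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # Snapshot the running configuration as JSON.
  $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json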
01:00:38.736 10:57:43 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 01:00:38.736 10:57:43 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 01:00:38.736 10:57:43 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 01:00:38.736 10:57:43 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 01:00:38.993 Calling clear_iscsi_subsystem 01:00:38.993 Calling clear_nvmf_subsystem 01:00:38.993 Calling clear_nbd_subsystem 01:00:38.993 Calling clear_ublk_subsystem 01:00:38.993 Calling clear_vhost_blk_subsystem 01:00:38.993 Calling clear_vhost_scsi_subsystem 01:00:38.993 Calling clear_bdev_subsystem 01:00:38.993 10:57:43 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 01:00:38.993 10:57:43 json_config -- json_config/json_config.sh@347 -- # count=100 01:00:38.993 10:57:43 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 01:00:38.993 10:57:43 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:00:38.993 10:57:43 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 01:00:38.993 10:57:43 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 01:00:39.250 10:57:44 json_config -- json_config/json_config.sh@349 -- # break 01:00:39.250 10:57:44 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 01:00:39.250 10:57:44 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 01:00:39.250 10:57:44 json_config -- json_config/common.sh@31 -- # local app=target 01:00:39.250 10:57:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:00:39.250 10:57:44 json_config -- json_config/common.sh@35 -- # [[ -n 73937 ]] 01:00:39.250 10:57:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 73937 01:00:39.250 10:57:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 01:00:39.250 10:57:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 01:00:39.250 10:57:44 json_config -- json_config/common.sh@41 -- # kill -0 73937 01:00:39.250 10:57:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 01:00:39.859 10:57:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 01:00:39.859 10:57:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 01:00:39.859 10:57:44 json_config -- json_config/common.sh@41 -- # kill -0 73937 01:00:39.859 10:57:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 01:00:39.859 10:57:44 json_config -- json_config/common.sh@43 -- # break 01:00:39.859 10:57:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 01:00:39.859 SPDK target shutdown done 01:00:39.859 10:57:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:00:39.859 INFO: relaunching applications... 01:00:39.859 10:57:44 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
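The shutdown that just completed follows the json_config/common.sh pattern visible in the trace: clear the subsystems over RPC, send SIGINT to the target, then poll for up to 30 half-second intervals until the PID is gone. A hedged sketch of that stop loop, with a placeholder $pid standing in for the concrete 73937 of this run:

  # Graceful stop: SIGINT lets spdk_app_stop run; kill -0 merely probes liveness.
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # leave the loop once the process is gone
      sleep 0.5
  done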
01:00:39.859 10:57:44 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:00:39.859 10:57:44 json_config -- json_config/common.sh@9 -- # local app=target 01:00:39.859 10:57:44 json_config -- json_config/common.sh@10 -- # shift 01:00:39.859 10:57:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:00:39.859 10:57:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 01:00:39.859 10:57:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 01:00:39.859 10:57:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:00:39.859 10:57:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:00:39.859 10:57:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=74212 01:00:39.859 10:57:44 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:00:39.859 Waiting for target to run... 01:00:39.859 10:57:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:00:39.859 10:57:44 json_config -- json_config/common.sh@25 -- # waitforlisten 74212 /var/tmp/spdk_tgt.sock 01:00:39.859 10:57:44 json_config -- common/autotest_common.sh@829 -- # '[' -z 74212 ']' 01:00:39.859 10:57:44 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:00:39.859 10:57:44 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:39.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:00:39.859 10:57:44 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:00:39.859 10:57:44 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:39.859 10:57:44 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:39.859 [2024-07-22 10:57:44.994024] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:00:39.860 [2024-07-22 10:57:44.994137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74212 ] 01:00:40.425 [2024-07-22 10:57:45.422733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:40.425 [2024-07-22 10:57:45.499139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:40.683 [2024-07-22 10:57:45.822960] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:40.683 [2024-07-22 10:57:45.855012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:00:40.941 10:57:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:40.941 10:57:46 json_config -- common/autotest_common.sh@862 -- # return 0 01:00:40.941 01:00:40.941 10:57:46 json_config -- json_config/common.sh@26 -- # echo '' 01:00:40.941 10:57:46 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 01:00:40.941 INFO: Checking if target configuration is the same... 01:00:40.941 10:57:46 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
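The "same configuration" check announced above does not reconfigure anything: json_diff.sh asks the relaunched target for its live configuration via save_config, normalizes both that output and the spdk_tgt_config.json it booted from with config_filter.py -method sort, and runs diff -u on the results. A rough, hand-condensed equivalent (not the script's exact plumbing; the /tmp file names are hypothetical stand-ins for its mktemp results):

  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Sort both JSON documents so ordering differences do not register as changes.
  $RPC save_config | $FILTER -method sort > /tmp/live_config.json
  $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/boot_config.json
  diff -u /tmp/boot_config.json /tmp/live_config.json && echo 'INFO: JSON config files are the same'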
01:00:40.941 10:57:46 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:00:40.941 10:57:46 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 01:00:40.941 10:57:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:00:40.941 + '[' 2 -ne 2 ']' 01:00:40.941 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 01:00:40.941 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 01:00:40.941 + rootdir=/home/vagrant/spdk_repo/spdk 01:00:40.941 +++ basename /dev/fd/62 01:00:40.941 ++ mktemp /tmp/62.XXX 01:00:40.941 + tmp_file_1=/tmp/62.rWt 01:00:40.941 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:00:40.941 ++ mktemp /tmp/spdk_tgt_config.json.XXX 01:00:40.941 + tmp_file_2=/tmp/spdk_tgt_config.json.jCo 01:00:40.941 + ret=0 01:00:40.941 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:00:41.506 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:00:41.506 + diff -u /tmp/62.rWt /tmp/spdk_tgt_config.json.jCo 01:00:41.506 INFO: JSON config files are the same 01:00:41.506 + echo 'INFO: JSON config files are the same' 01:00:41.506 + rm /tmp/62.rWt /tmp/spdk_tgt_config.json.jCo 01:00:41.506 + exit 0 01:00:41.506 10:57:46 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 01:00:41.506 INFO: changing configuration and checking if this can be detected... 01:00:41.506 10:57:46 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 01:00:41.506 10:57:46 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 01:00:41.506 10:57:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 01:00:41.763 10:57:46 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:00:41.763 10:57:46 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 01:00:41.763 10:57:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:00:41.763 + '[' 2 -ne 2 ']' 01:00:41.763 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 01:00:41.763 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
01:00:41.763 + rootdir=/home/vagrant/spdk_repo/spdk 01:00:41.763 +++ basename /dev/fd/62 01:00:41.763 ++ mktemp /tmp/62.XXX 01:00:41.763 + tmp_file_1=/tmp/62.uj8 01:00:41.763 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:00:41.763 ++ mktemp /tmp/spdk_tgt_config.json.XXX 01:00:41.763 + tmp_file_2=/tmp/spdk_tgt_config.json.lrb 01:00:41.763 + ret=0 01:00:41.763 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:00:42.328 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:00:42.328 + diff -u /tmp/62.uj8 /tmp/spdk_tgt_config.json.lrb 01:00:42.328 + ret=1 01:00:42.328 + echo '=== Start of file: /tmp/62.uj8 ===' 01:00:42.328 + cat /tmp/62.uj8 01:00:42.328 + echo '=== End of file: /tmp/62.uj8 ===' 01:00:42.328 + echo '' 01:00:42.328 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lrb ===' 01:00:42.328 + cat /tmp/spdk_tgt_config.json.lrb 01:00:42.328 + echo '=== End of file: /tmp/spdk_tgt_config.json.lrb ===' 01:00:42.328 + echo '' 01:00:42.328 + rm /tmp/62.uj8 /tmp/spdk_tgt_config.json.lrb 01:00:42.328 + exit 1 01:00:42.328 INFO: configuration change detected. 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 01:00:42.328 10:57:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:42.328 10:57:47 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@311 -- # local ret=0 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@321 -- # [[ -n 74212 ]] 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 01:00:42.328 10:57:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 01:00:42.328 10:57:47 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@197 -- # uname -s 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 01:00:42.328 10:57:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:42.328 10:57:47 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:42.328 10:57:47 json_config -- json_config/json_config.sh@327 -- # killprocess 74212 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@948 -- # '[' -z 74212 ']' 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@952 -- # kill -0 74212 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@953 -- # uname 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74212 01:00:42.329 
10:57:47 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:42.329 killing process with pid 74212 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74212' 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@967 -- # kill 74212 01:00:42.329 10:57:47 json_config -- common/autotest_common.sh@972 -- # wait 74212 01:00:42.586 10:57:47 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:00:42.586 10:57:47 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 01:00:42.586 10:57:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 01:00:42.586 10:57:47 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:42.586 10:57:47 json_config -- json_config/json_config.sh@332 -- # return 0 01:00:42.586 INFO: Success 01:00:42.586 10:57:47 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 01:00:42.586 01:00:42.586 real 0m8.800s 01:00:42.586 user 0m12.701s 01:00:42.586 sys 0m1.976s 01:00:42.586 10:57:47 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:42.586 10:57:47 json_config -- common/autotest_common.sh@10 -- # set +x 01:00:42.586 ************************************ 01:00:42.586 END TEST json_config 01:00:42.586 ************************************ 01:00:42.586 10:57:47 -- common/autotest_common.sh@1142 -- # return 0 01:00:42.586 10:57:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:00:42.586 10:57:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:42.586 10:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:42.586 10:57:47 -- common/autotest_common.sh@10 -- # set +x 01:00:42.586 ************************************ 01:00:42.586 START TEST json_config_extra_key 01:00:42.586 ************************************ 01:00:42.586 10:57:47 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:00:42.586 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:42.586 10:57:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:42.586 10:57:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:42.586 10:57:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:42.586 10:57:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:42.586 10:57:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:42.586 10:57:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:42.586 10:57:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 01:00:42.586 10:57:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:42.586 10:57:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:42.844 10:57:47 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:42.844 10:57:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:00:42.844 10:57:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:00:42.844 10:57:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:00:42.844 INFO: launching applications... 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 01:00:42.844 10:57:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74388 01:00:42.844 Waiting for target to run... 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74388 /var/tmp/spdk_tgt.sock 01:00:42.844 10:57:47 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 74388 ']' 01:00:42.844 10:57:47 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:00:42.844 10:57:47 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:42.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
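Note: the json_config_extra_key run above starts spdk_tgt with --json and then blocks on the RPC UNIX socket at /var/tmp/spdk_tgt.sock ("Waiting for process to start up and listen..."). A minimal sketch of that kind of readiness poll, not the actual waitforlisten helper from autotest_common.sh; the rpc.py path, retry count and sleep interval are assumptions:

    sock=/var/tmp/spdk_tgt.sock
    pid=$spdk_tgt_pid                       # assumed: PID of the spdk_tgt launched above
    for ((i = 0; i < 100; i++)); do
        # Bail out if the target died while we were waiting.
        kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
        # Consider the target ready once the socket exists and answers a trivial RPC.
        if [[ -S "$sock" ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done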
01:00:42.844 10:57:47 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:00:42.844 10:57:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:00:42.844 10:57:47 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:42.844 10:57:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:00:42.844 [2024-07-22 10:57:47.861657] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:00:42.844 [2024-07-22 10:57:47.861765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74388 ] 01:00:43.101 [2024-07-22 10:57:48.294188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:43.358 [2024-07-22 10:57:48.368903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:43.923 10:57:48 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:43.923 10:57:48 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 01:00:43.923 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 01:00:43.923 INFO: shutting down applications... 01:00:43.923 10:57:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 01:00:43.923 10:57:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74388 ]] 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74388 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74388 01:00:43.923 10:57:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:00:44.179 10:57:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:00:44.179 10:57:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:00:44.179 10:57:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74388 01:00:44.179 10:57:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 01:00:44.179 10:57:49 json_config_extra_key -- json_config/common.sh@43 -- # break 01:00:44.179 10:57:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 01:00:44.179 SPDK target shutdown done 01:00:44.179 10:57:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:00:44.179 Success 01:00:44.179 10:57:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 01:00:44.179 01:00:44.179 real 0m1.655s 01:00:44.179 user 0m1.606s 01:00:44.179 sys 0m0.443s 01:00:44.179 10:57:49 json_config_extra_key -- common/autotest_common.sh@1124 
-- # xtrace_disable 01:00:44.179 10:57:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:00:44.179 ************************************ 01:00:44.179 END TEST json_config_extra_key 01:00:44.179 ************************************ 01:00:44.448 10:57:49 -- common/autotest_common.sh@1142 -- # return 0 01:00:44.448 10:57:49 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:00:44.448 10:57:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:44.448 10:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:44.448 10:57:49 -- common/autotest_common.sh@10 -- # set +x 01:00:44.448 ************************************ 01:00:44.448 START TEST alias_rpc 01:00:44.448 ************************************ 01:00:44.448 10:57:49 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:00:44.448 * Looking for test storage... 01:00:44.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 01:00:44.448 10:57:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:00:44.448 10:57:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74470 01:00:44.448 10:57:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74470 01:00:44.448 10:57:49 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 74470 ']' 01:00:44.448 10:57:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:44.448 10:57:49 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:44.448 10:57:49 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:44.448 10:57:49 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:44.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:44.448 10:57:49 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:44.448 10:57:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:44.448 [2024-07-22 10:57:49.575221] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
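Note: the alias_rpc setup above launches spdk_tgt in the background, records its PID (74470 here) and registers an ERR trap so the target is torn down if any later step fails. A bare-bones sketch of that pattern, not the test's killprocess/waitforlisten helpers; the cleanup body is an assumption:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Any failing command from here on fires the trap and tears the target down.
    # (set -E would make the trap fire inside functions as well.)
    trap 'kill -SIGINT "$spdk_tgt_pid" 2>/dev/null; exit 1' ERR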
01:00:44.448 [2024-07-22 10:57:49.575799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74470 ] 01:00:44.705 [2024-07-22 10:57:49.717345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:44.705 [2024-07-22 10:57:49.803214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:45.639 10:57:50 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:45.639 10:57:50 alias_rpc -- common/autotest_common.sh@862 -- # return 0 01:00:45.639 10:57:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 01:00:45.899 10:57:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74470 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 74470 ']' 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 74470 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@953 -- # uname 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74470 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:45.899 killing process with pid 74470 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74470' 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@967 -- # kill 74470 01:00:45.899 10:57:50 alias_rpc -- common/autotest_common.sh@972 -- # wait 74470 01:00:46.158 01:00:46.158 real 0m1.909s 01:00:46.158 user 0m2.175s 01:00:46.158 sys 0m0.485s 01:00:46.158 10:57:51 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:46.158 10:57:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:46.158 ************************************ 01:00:46.158 END TEST alias_rpc 01:00:46.158 ************************************ 01:00:46.417 10:57:51 -- common/autotest_common.sh@1142 -- # return 0 01:00:46.417 10:57:51 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 01:00:46.417 10:57:51 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:00:46.417 10:57:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:46.417 10:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:46.417 10:57:51 -- common/autotest_common.sh@10 -- # set +x 01:00:46.417 ************************************ 01:00:46.417 START TEST dpdk_mem_utility 01:00:46.417 ************************************ 01:00:46.417 10:57:51 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:00:46.417 * Looking for test storage... 
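Note: before killing PID 74470 the harness checks via ps -o comm= that the PID still belongs to an SPDK reactor (comm "reactor_0") rather than an unrelated process that reused the number. A simplified sketch of that guard; the function name and messages are illustrative, not the autotest_common.sh implementation:

    kill_spdk_target() {
        local pid=$1 comm
        kill -0 "$pid" 2>/dev/null || return 0               # nothing left to do
        comm=$(ps --no-headers -o comm= "$pid")
        # Refuse to kill a PID that no longer belongs to an SPDK reactor.
        [[ $comm == reactor_* ]] || { echo "pid $pid is '$comm', not killing" >&2; return 1; }
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                      # only valid if the PID is our own child
    }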
01:00:46.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 01:00:46.417 10:57:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:00:46.417 10:57:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74562 01:00:46.417 10:57:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74562 01:00:46.417 10:57:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:00:46.417 10:57:51 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 74562 ']' 01:00:46.417 10:57:51 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:46.417 10:57:51 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:46.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:46.417 10:57:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:46.417 10:57:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:46.417 10:57:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:00:46.417 [2024-07-22 10:57:51.539412] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:00:46.417 [2024-07-22 10:57:51.539511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74562 ] 01:00:46.676 [2024-07-22 10:57:51.682353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:46.676 [2024-07-22 10:57:51.767331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:47.612 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:47.613 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 01:00:47.613 10:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 01:00:47.613 10:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 01:00:47.613 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:47.613 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:00:47.613 { 01:00:47.613 "filename": "/tmp/spdk_mem_dump.txt" 01:00:47.613 } 01:00:47.613 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:47.613 10:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:00:47.613 DPDK memory size 814.000000 MiB in 1 heap(s) 01:00:47.613 1 heaps totaling size 814.000000 MiB 01:00:47.613 size: 814.000000 MiB heap id: 0 01:00:47.613 end heaps---------- 01:00:47.613 8 mempools totaling size 598.116089 MiB 01:00:47.613 size: 212.674988 MiB name: PDU_immediate_data_Pool 01:00:47.613 size: 158.602051 MiB name: PDU_data_out_Pool 01:00:47.613 size: 84.521057 MiB name: bdev_io_74562 01:00:47.613 size: 51.011292 MiB name: evtpool_74562 01:00:47.613 size: 50.003479 MiB name: msgpool_74562 01:00:47.613 size: 21.763794 MiB name: PDU_Pool 01:00:47.613 size: 19.513306 MiB name: SCSI_TASK_Pool 
01:00:47.613 size: 0.026123 MiB name: Session_Pool 01:00:47.613 end mempools------- 01:00:47.613 6 memzones totaling size 4.142822 MiB 01:00:47.613 size: 1.000366 MiB name: RG_ring_0_74562 01:00:47.613 size: 1.000366 MiB name: RG_ring_1_74562 01:00:47.613 size: 1.000366 MiB name: RG_ring_4_74562 01:00:47.613 size: 1.000366 MiB name: RG_ring_5_74562 01:00:47.613 size: 0.125366 MiB name: RG_ring_2_74562 01:00:47.613 size: 0.015991 MiB name: RG_ring_3_74562 01:00:47.613 end memzones------- 01:00:47.613 10:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 01:00:47.613 heap id: 0 total size: 814.000000 MiB number of busy elements: 218 number of free elements: 15 01:00:47.613 list of free elements. size: 12.486938 MiB 01:00:47.613 element at address: 0x200000400000 with size: 1.999512 MiB 01:00:47.613 element at address: 0x200018e00000 with size: 0.999878 MiB 01:00:47.613 element at address: 0x200019000000 with size: 0.999878 MiB 01:00:47.613 element at address: 0x200003e00000 with size: 0.996277 MiB 01:00:47.613 element at address: 0x200031c00000 with size: 0.994446 MiB 01:00:47.613 element at address: 0x200013800000 with size: 0.978699 MiB 01:00:47.613 element at address: 0x200007000000 with size: 0.959839 MiB 01:00:47.613 element at address: 0x200019200000 with size: 0.936584 MiB 01:00:47.613 element at address: 0x200000200000 with size: 0.837036 MiB 01:00:47.613 element at address: 0x20001aa00000 with size: 0.572083 MiB 01:00:47.613 element at address: 0x20000b200000 with size: 0.489990 MiB 01:00:47.613 element at address: 0x200000800000 with size: 0.487061 MiB 01:00:47.613 element at address: 0x200019400000 with size: 0.485657 MiB 01:00:47.613 element at address: 0x200027e00000 with size: 0.398315 MiB 01:00:47.613 element at address: 0x200003a00000 with size: 0.351685 MiB 01:00:47.613 list of standard malloc elements. 
size: 199.250488 MiB 01:00:47.613 element at address: 0x20000b3fff80 with size: 132.000122 MiB 01:00:47.613 element at address: 0x2000071fff80 with size: 64.000122 MiB 01:00:47.613 element at address: 0x200018efff80 with size: 1.000122 MiB 01:00:47.613 element at address: 0x2000190fff80 with size: 1.000122 MiB 01:00:47.613 element at address: 0x2000192fff80 with size: 1.000122 MiB 01:00:47.613 element at address: 0x2000003d9f00 with size: 0.140747 MiB 01:00:47.613 element at address: 0x2000192eff00 with size: 0.062622 MiB 01:00:47.613 element at address: 0x2000003fdf80 with size: 0.007935 MiB 01:00:47.613 element at address: 0x2000192efdc0 with size: 0.000305 MiB 01:00:47.613 element at address: 0x2000002d6480 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6540 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6600 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d66c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6780 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6840 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6900 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d69c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6a80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6b40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6c00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6d80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6e40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6f00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d71c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7280 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7340 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7400 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d74c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7580 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7640 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7700 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d77c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7880 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7940 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7a00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7b80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000002d7c40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000003d9e40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000087cb00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000087cbc0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000087cc80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000087cd40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000087ce00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000087cec0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000008fd180 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a080 with size: 0.000183 MiB 
01:00:47.613 element at address: 0x200003a5a140 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a200 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a380 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a440 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a500 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a680 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a740 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a800 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5a980 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5aa40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5ab00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5abc0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5ac80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5ad40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5ae00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5aec0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5af80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003a5b040 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003adb300 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003adb500 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003adf7c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003affa80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003affb40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200003eff0c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000070fdd80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000b27d700 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000b27d880 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000b27d940 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000b27da00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000b27dac0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000192efc40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000192efd00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x2000194bc740 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92740 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92800 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa928c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92980 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92a40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92b00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92c80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92d40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92e00 with size: 0.000183 MiB 01:00:47.613 element at 
address: 0x20001aa92ec0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa92f80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93040 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93100 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa931c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93280 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93340 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93400 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa934c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93580 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93640 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93700 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa937c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93880 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93940 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93a00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93b80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93c40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93d00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93e80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa93f40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94000 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa940c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94180 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94240 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94300 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa943c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94480 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94540 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94600 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa946c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94780 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94840 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94900 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa949c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94a80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94b40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94c00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94d80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94e40 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94f00 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa95080 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa95140 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa95200 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa952c0 with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa95380 
with size: 0.000183 MiB 01:00:47.613 element at address: 0x20001aa95440 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200027e65f80 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200027e66040 with size: 0.000183 MiB 01:00:47.613 element at address: 0x200027e6cc40 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6ce40 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6cf00 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d080 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d140 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d200 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d380 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d440 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d500 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d680 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d740 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d800 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6d980 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6da40 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6db00 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6dc80 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6dd40 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6de00 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6dec0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6df80 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e040 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e100 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e280 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e340 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e400 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e580 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e640 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e700 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e880 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6e940 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6ea00 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6eac0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6eb80 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6ec40 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6ed00 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6edc0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6ee80 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6ef40 with size: 0.000183 MiB 
01:00:47.614 element at address: 0x200027e6f000 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f180 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f240 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f300 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f480 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f540 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f600 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f780 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f840 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f900 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6fa80 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6fb40 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6fc00 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6fd80 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6fe40 with size: 0.000183 MiB 01:00:47.614 element at address: 0x200027e6ff00 with size: 0.000183 MiB 01:00:47.614 list of memzone associated elements. size: 602.262573 MiB 01:00:47.614 element at address: 0x20001aa95500 with size: 211.416748 MiB 01:00:47.614 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 01:00:47.614 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 01:00:47.614 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 01:00:47.614 element at address: 0x2000139fab80 with size: 84.020630 MiB 01:00:47.614 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74562_0 01:00:47.614 element at address: 0x2000009ff380 with size: 48.003052 MiB 01:00:47.614 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74562_0 01:00:47.614 element at address: 0x200003fff380 with size: 48.003052 MiB 01:00:47.614 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74562_0 01:00:47.614 element at address: 0x2000195be940 with size: 20.255554 MiB 01:00:47.614 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 01:00:47.614 element at address: 0x200031dfeb40 with size: 18.005066 MiB 01:00:47.614 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 01:00:47.614 element at address: 0x2000005ffe00 with size: 2.000488 MiB 01:00:47.614 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74562 01:00:47.614 element at address: 0x200003bffe00 with size: 2.000488 MiB 01:00:47.614 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74562 01:00:47.614 element at address: 0x2000002d7d00 with size: 1.008118 MiB 01:00:47.614 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74562 01:00:47.614 element at address: 0x20000b2fde40 with size: 1.008118 MiB 01:00:47.614 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 01:00:47.614 element at address: 0x2000194bc800 with size: 1.008118 MiB 01:00:47.614 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 01:00:47.614 element at address: 0x2000070fde40 with size: 1.008118 MiB 01:00:47.614 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 01:00:47.614 element at address: 0x2000008fd240 with size: 1.008118 MiB 01:00:47.614 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 01:00:47.614 element at address: 0x200003eff180 with size: 1.000488 MiB 01:00:47.614 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74562 01:00:47.614 element at address: 0x200003affc00 with size: 1.000488 MiB 01:00:47.614 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74562 01:00:47.614 element at address: 0x2000138fa980 with size: 1.000488 MiB 01:00:47.614 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74562 01:00:47.614 element at address: 0x200031cfe940 with size: 1.000488 MiB 01:00:47.614 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74562 01:00:47.614 element at address: 0x200003a5b100 with size: 0.500488 MiB 01:00:47.614 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74562 01:00:47.614 element at address: 0x20000b27db80 with size: 0.500488 MiB 01:00:47.614 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 01:00:47.614 element at address: 0x20000087cf80 with size: 0.500488 MiB 01:00:47.614 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 01:00:47.614 element at address: 0x20001947c540 with size: 0.250488 MiB 01:00:47.614 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 01:00:47.614 element at address: 0x200003adf880 with size: 0.125488 MiB 01:00:47.614 associated memzone info: size: 0.125366 MiB name: RG_ring_2_74562 01:00:47.614 element at address: 0x2000070f5b80 with size: 0.031738 MiB 01:00:47.614 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 01:00:47.614 element at address: 0x200027e66100 with size: 0.023743 MiB 01:00:47.614 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 01:00:47.614 element at address: 0x200003adb5c0 with size: 0.016113 MiB 01:00:47.614 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74562 01:00:47.614 element at address: 0x200027e6c240 with size: 0.002441 MiB 01:00:47.614 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 01:00:47.614 element at address: 0x2000002d7080 with size: 0.000305 MiB 01:00:47.614 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74562 01:00:47.614 element at address: 0x200003adb3c0 with size: 0.000305 MiB 01:00:47.614 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74562 01:00:47.614 element at address: 0x200027e6cd00 with size: 0.000305 MiB 01:00:47.614 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 01:00:47.614 10:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 01:00:47.614 10:57:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74562 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 74562 ']' 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 74562 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74562 01:00:47.614 killing process with pid 74562 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:00:47.614 10:57:52 
dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74562' 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 74562 01:00:47.614 10:57:52 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 74562 01:00:48.264 01:00:48.264 real 0m1.735s 01:00:48.264 user 0m1.886s 01:00:48.264 sys 0m0.473s 01:00:48.264 ************************************ 01:00:48.264 END TEST dpdk_mem_utility 01:00:48.264 ************************************ 01:00:48.264 10:57:53 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:48.264 10:57:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:00:48.264 10:57:53 -- common/autotest_common.sh@1142 -- # return 0 01:00:48.264 10:57:53 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:00:48.264 10:57:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:48.264 10:57:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:48.264 10:57:53 -- common/autotest_common.sh@10 -- # set +x 01:00:48.264 ************************************ 01:00:48.264 START TEST event 01:00:48.264 ************************************ 01:00:48.264 10:57:53 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:00:48.264 * Looking for test storage... 01:00:48.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:00:48.264 10:57:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:00:48.264 10:57:53 event -- bdev/nbd_common.sh@6 -- # set -e 01:00:48.264 10:57:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:00:48.264 10:57:53 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 01:00:48.264 10:57:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:48.264 10:57:53 event -- common/autotest_common.sh@10 -- # set +x 01:00:48.264 ************************************ 01:00:48.264 START TEST event_perf 01:00:48.264 ************************************ 01:00:48.264 10:57:53 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:00:48.264 Running I/O for 1 seconds...[2024-07-22 10:57:53.278323] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
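Note: the dpdk_mem_utility run above drives everything through two tools seen in the trace: the env_dpdk_get_mem_stats RPC, which reports {"filename": "/tmp/spdk_mem_dump.txt"}, and scripts/dpdk_mem_info.py, which summarizes that dump (a plain run for the heap/mempool/memzone totals, -m 0 for the per-element breakdown of heap id 0). Reproduced by hand against a running target it looks roughly like this (default RPC socket and dump path assumed):

    scripts/rpc.py env_dpdk_get_mem_stats       # writes the dump, returns {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py                    # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0               # free/allocated elements of heap id 0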
01:00:48.264 [2024-07-22 10:57:53.278415] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74652 ] 01:00:48.264 [2024-07-22 10:57:53.416973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:00:48.522 [2024-07-22 10:57:53.507115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:48.522 [2024-07-22 10:57:53.507228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:48.522 [2024-07-22 10:57:53.507331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:48.522 Running I/O for 1 seconds...[2024-07-22 10:57:53.507333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:00:49.455 01:00:49.455 lcore 0: 193425 01:00:49.455 lcore 1: 193421 01:00:49.455 lcore 2: 193422 01:00:49.455 lcore 3: 193423 01:00:49.455 done. 01:00:49.455 01:00:49.455 real 0m1.319s 01:00:49.455 user 0m4.137s 01:00:49.455 sys 0m0.063s 01:00:49.455 10:57:54 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:49.455 10:57:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 01:00:49.455 ************************************ 01:00:49.455 END TEST event_perf 01:00:49.455 ************************************ 01:00:49.455 10:57:54 event -- common/autotest_common.sh@1142 -- # return 0 01:00:49.455 10:57:54 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:00:49.455 10:57:54 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:00:49.455 10:57:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:49.455 10:57:54 event -- common/autotest_common.sh@10 -- # set +x 01:00:49.455 ************************************ 01:00:49.455 START TEST event_reactor 01:00:49.455 ************************************ 01:00:49.455 10:57:54 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:00:49.455 [2024-07-22 10:57:54.644039] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
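Note: every sub-test in this log is driven through the same run_test wrapper, which prints the START TEST / END TEST banners and the real/user/sys timing that follows each block. A rough stand-in for what that wrapper does, not the actual autotest_common.sh code:

    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                  # shell keyword; prints the real/user/sys lines
        local rc=$?
        echo "END TEST $name"
        return "$rc"
    }
    # e.g. run_test_sketch event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1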
01:00:49.455 [2024-07-22 10:57:54.644926] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74690 ] 01:00:49.712 [2024-07-22 10:57:54.780403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:49.713 [2024-07-22 10:57:54.864603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:51.089 test_start 01:00:51.089 oneshot 01:00:51.089 tick 100 01:00:51.089 tick 100 01:00:51.089 tick 250 01:00:51.089 tick 100 01:00:51.089 tick 100 01:00:51.089 tick 100 01:00:51.089 tick 250 01:00:51.089 tick 500 01:00:51.089 tick 100 01:00:51.089 tick 100 01:00:51.089 tick 250 01:00:51.089 tick 100 01:00:51.089 tick 100 01:00:51.089 test_end 01:00:51.089 ************************************ 01:00:51.089 END TEST event_reactor 01:00:51.089 ************************************ 01:00:51.089 01:00:51.089 real 0m1.302s 01:00:51.089 user 0m1.136s 01:00:51.089 sys 0m0.059s 01:00:51.089 10:57:55 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:51.089 10:57:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 01:00:51.089 10:57:55 event -- common/autotest_common.sh@1142 -- # return 0 01:00:51.089 10:57:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:00:51.089 10:57:55 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:00:51.089 10:57:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:51.089 10:57:55 event -- common/autotest_common.sh@10 -- # set +x 01:00:51.089 ************************************ 01:00:51.089 START TEST event_reactor_perf 01:00:51.089 ************************************ 01:00:51.089 10:57:55 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:00:51.089 [2024-07-22 10:57:56.000986] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:00:51.089 [2024-07-22 10:57:56.001094] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74720 ] 01:00:51.089 [2024-07-22 10:57:56.138775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:51.089 [2024-07-22 10:57:56.211064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:52.461 test_start 01:00:52.461 test_end 01:00:52.461 Performance: 418643 events per second 01:00:52.461 01:00:52.461 real 0m1.303s 01:00:52.461 user 0m1.147s 01:00:52.461 sys 0m0.051s 01:00:52.461 10:57:57 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:52.461 10:57:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 01:00:52.461 ************************************ 01:00:52.461 END TEST event_reactor_perf 01:00:52.461 ************************************ 01:00:52.461 10:57:57 event -- common/autotest_common.sh@1142 -- # return 0 01:00:52.461 10:57:57 event -- event/event.sh@49 -- # uname -s 01:00:52.461 10:57:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 01:00:52.461 10:57:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:00:52.461 10:57:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:52.461 10:57:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:52.461 10:57:57 event -- common/autotest_common.sh@10 -- # set +x 01:00:52.461 ************************************ 01:00:52.461 START TEST event_scheduler 01:00:52.461 ************************************ 01:00:52.461 10:57:57 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:00:52.461 * Looking for test storage... 01:00:52.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 01:00:52.461 10:57:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 01:00:52.461 10:57:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74782 01:00:52.461 10:57:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 01:00:52.461 10:57:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 01:00:52.461 10:57:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74782 01:00:52.461 10:57:57 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 74782 ']' 01:00:52.461 10:57:57 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:52.461 10:57:57 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:52.461 10:57:57 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:52.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:52.461 10:57:57 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:52.461 10:57:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:00:52.461 [2024-07-22 10:57:57.482918] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
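Note: the scheduler app above is launched with -m 0xF, a hex core mask selecting cores 0-3, and -p 0x2, the main core index given in hex (it appears in the EAL parameters below as --main-lcore=2). A tiny, purely illustrative helper for decoding such a mask:

    list_cores() {
        local mask=$(( $1 )) core          # accepts hex like 0xF
        for ((core = 0; core < 64; core++)); do
            (( (mask >> core) & 1 )) && printf '%s ' "$core"
        done
        echo
    }
    list_cores 0xF                          # -> 0 1 2 3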
01:00:52.461 [2024-07-22 10:57:57.483267] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74782 ] 01:00:52.461 [2024-07-22 10:57:57.624813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:00:52.720 [2024-07-22 10:57:57.725357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:52.720 [2024-07-22 10:57:57.725516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:52.720 [2024-07-22 10:57:57.725646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:00:52.720 [2024-07-22 10:57:57.725653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:00:53.290 10:57:58 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:53.290 10:57:58 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 01:00:53.290 10:57:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 01:00:53.290 10:57:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.290 10:57:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:00:53.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:00:53.549 POWER: Cannot set governor of lcore 0 to userspace 01:00:53.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:00:53.549 POWER: Cannot set governor of lcore 0 to performance 01:00:53.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:00:53.549 POWER: Cannot set governor of lcore 0 to userspace 01:00:53.549 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:00:53.549 POWER: Cannot set governor of lcore 0 to userspace 01:00:53.550 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 01:00:53.550 POWER: Unable to set Power Management Environment for lcore 0 01:00:53.550 [2024-07-22 10:57:58.501822] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 01:00:53.550 [2024-07-22 10:57:58.501837] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 01:00:53.550 [2024-07-22 10:57:58.501845] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 01:00:53.550 [2024-07-22 10:57:58.501858] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 01:00:53.550 [2024-07-22 10:57:58.501866] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 01:00:53.550 [2024-07-22 10:57:58.501873] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 01:00:53.550 10:57:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 01:00:53.550 10:57:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 [2024-07-22 10:57:58.597636] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
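Note: because the scheduler app was started with --wait-for-rpc, the framework stays paused until the test pushes its configuration over RPC: framework_set_scheduler dynamic first (the POWER/governor messages are the dynamic scheduler failing to take over cpufreq inside the VM, which the test tolerates), then framework_start_init to bring the reactors up. Done by hand it is roughly the following; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and the socket path here is the assumed default:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init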
01:00:53.550 10:57:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 01:00:53.550 10:57:58 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:53.550 10:57:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 ************************************ 01:00:53.550 START TEST scheduler_create_thread 01:00:53.550 ************************************ 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 2 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 3 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 4 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 5 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 6 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 7 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 8 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 9 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 10 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:53.550 10:57:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:55.454 10:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:55.454 10:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 01:00:55.454 10:58:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 01:00:55.454 10:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 01:00:55.454 10:58:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:56.389 ************************************ 01:00:56.389 END TEST scheduler_create_thread 01:00:56.389 ************************************ 01:00:56.389 10:58:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:00:56.389 01:00:56.389 real 0m2.618s 01:00:56.389 user 0m0.018s 01:00:56.389 sys 0m0.007s 01:00:56.389 10:58:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:56.389 10:58:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 01:00:56.389 10:58:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:00:56.389 10:58:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74782 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 74782 ']' 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 74782 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74782 01:00:56.389 killing process with pid 74782 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74782' 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 74782 01:00:56.389 10:58:01 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 74782 01:00:56.647 [2024-07-22 10:58:01.707138] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
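The scheduler_create_thread trace above boils down to a short RPC sequence from the test's scheduler_plugin: create pinned threads with a name, a core mask and an active percentage, then set one thread to 50% active and delete another. Roughly the same sequence by hand, assuming the plugin is importable by rpc.py (e.g. via PYTHONPATH, an assumption) and reusing the thread IDs 11 and 12 that this particular run returned:

./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12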
01:00:56.905 ************************************ 01:00:56.905 END TEST event_scheduler 01:00:56.905 ************************************ 01:00:56.905 01:00:56.905 real 0m4.727s 01:00:56.905 user 0m8.932s 01:00:56.905 sys 0m0.420s 01:00:56.905 10:58:02 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 01:00:56.905 10:58:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:00:57.163 10:58:02 event -- common/autotest_common.sh@1142 -- # return 0 01:00:57.163 10:58:02 event -- event/event.sh@51 -- # modprobe -n nbd 01:00:57.163 10:58:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 01:00:57.163 10:58:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:00:57.163 10:58:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:00:57.163 10:58:02 event -- common/autotest_common.sh@10 -- # set +x 01:00:57.163 ************************************ 01:00:57.163 START TEST app_repeat 01:00:57.163 ************************************ 01:00:57.163 10:58:02 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=74900 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 01:00:57.163 Process app_repeat pid: 74900 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74900' 01:00:57.163 spdk_app_start Round 0 01:00:57.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 01:00:57.163 10:58:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74900 /var/tmp/spdk-nbd.sock 01:00:57.163 10:58:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 74900 ']' 01:00:57.164 10:58:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:00:57.164 10:58:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 01:00:57.164 10:58:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:00:57.164 10:58:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 01:00:57.164 10:58:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:00:57.164 [2024-07-22 10:58:02.162379] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
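The app_repeat test that starts here launches the helper binary with the arguments shown in the trace and then drives it through three rounds, re-creating the malloc bdevs and re-running the nbd data check each time. Roughly how the launch looks, assuming the backgrounded start that the recorded repeat_pid implies:

/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!   # 74900 in this run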
01:00:57.164 [2024-07-22 10:58:02.162763] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74900 ] 01:00:57.164 [2024-07-22 10:58:02.303361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:00:57.421 [2024-07-22 10:58:02.417505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:00:57.421 [2024-07-22 10:58:02.417519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:00:57.421 10:58:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:00:57.421 10:58:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 01:00:57.421 10:58:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:00:57.678 Malloc0 01:00:57.678 10:58:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:00:57.936 Malloc1 01:00:58.193 10:58:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:00:58.193 10:58:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:00:58.451 /dev/nbd0 01:00:58.451 10:58:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:00:58.451 10:58:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 01:00:58.451 10:58:03 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:00:58.451 1+0 records in 01:00:58.451 1+0 records out 01:00:58.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026916 s, 15.2 MB/s 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:00:58.451 10:58:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:00:58.451 10:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:00:58.451 10:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:00:58.451 10:58:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:00:58.709 /dev/nbd1 01:00:58.709 10:58:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:00:58.709 10:58:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:00:58.709 1+0 records in 01:00:58.709 1+0 records out 01:00:58.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318049 s, 12.9 MB/s 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:00:58.709 10:58:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:00:58.709 10:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:00:58.709 10:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:00:58.709 10:58:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:00:58.709 10:58:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:58.709 
10:58:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:00:58.966 10:58:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:00:58.966 { 01:00:58.966 "bdev_name": "Malloc0", 01:00:58.966 "nbd_device": "/dev/nbd0" 01:00:58.966 }, 01:00:58.966 { 01:00:58.966 "bdev_name": "Malloc1", 01:00:58.966 "nbd_device": "/dev/nbd1" 01:00:58.966 } 01:00:58.966 ]' 01:00:58.966 10:58:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:00:58.966 { 01:00:58.966 "bdev_name": "Malloc0", 01:00:58.966 "nbd_device": "/dev/nbd0" 01:00:58.966 }, 01:00:58.966 { 01:00:58.966 "bdev_name": "Malloc1", 01:00:58.966 "nbd_device": "/dev/nbd1" 01:00:58.966 } 01:00:58.966 ]' 01:00:58.966 10:58:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:00:59.224 /dev/nbd1' 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:00:59.224 /dev/nbd1' 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:00:59.224 256+0 records in 01:00:59.224 256+0 records out 01:00:59.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630973 s, 166 MB/s 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:00:59.224 256+0 records in 01:00:59.224 256+0 records out 01:00:59.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272611 s, 38.5 MB/s 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:00:59.224 256+0 records in 01:00:59.224 256+0 records out 01:00:59.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277827 s, 37.7 MB/s 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:00:59.224 10:58:04 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:00:59.224 10:58:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:00:59.483 10:58:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:00:59.741 10:58:04 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:59.741 10:58:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:00:59.999 10:58:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:00:59.999 10:58:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:00:59.999 10:58:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:01:00.257 10:58:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:01:00.257 10:58:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:01:00.517 10:58:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:01:00.775 [2024-07-22 10:58:05.752253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:01:00.775 [2024-07-22 10:58:05.818248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:01:00.775 [2024-07-22 10:58:05.818259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:00.775 [2024-07-22 10:58:05.875759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:01:00.775 [2024-07-22 10:58:05.875824] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:01:04.062 spdk_app_start Round 1 01:01:04.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:01:04.062 10:58:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:01:04.062 10:58:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 01:01:04.062 10:58:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74900 /var/tmp/spdk-nbd.sock 01:01:04.062 10:58:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 74900 ']' 01:01:04.062 10:58:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:01:04.062 10:58:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:04.062 10:58:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
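Each round above exercises the same nbd round trip: two 64 MB malloc bdevs (4096-byte blocks) are created over the app's RPC socket, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is written through each device with dd and compared back with cmp, and the exports are torn down again. A hand-run sketch of one device's round trip, assuming the nbd module is loaded and using a scratch file of your own choosing (the /tmp path below is an assumption; the test uses its own nbdrandtest file):

modprobe nbd
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096              # returns the bdev name, Malloc0 here
$rpc nbd_start_disk Malloc0 /dev/nbd0
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0      # read back through the nbd device and compare
$rpc nbd_stop_disk /dev/nbd0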
01:01:04.062 10:58:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:04.062 10:58:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:01:04.062 10:58:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:04.062 10:58:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 01:01:04.062 10:58:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:01:04.062 Malloc0 01:01:04.062 10:58:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:01:04.320 Malloc1 01:01:04.320 10:58:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:01:04.320 10:58:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:01:04.578 /dev/nbd0 01:01:04.578 10:58:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:01:04.578 10:58:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:01:04.578 1+0 records in 01:01:04.578 1+0 records out 
01:01:04.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274249 s, 14.9 MB/s 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:01:04.578 10:58:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:01:04.578 10:58:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:01:04.578 10:58:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:01:04.578 10:58:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:01:04.837 /dev/nbd1 01:01:04.837 10:58:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:01:04.837 10:58:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:01:04.837 1+0 records in 01:01:04.837 1+0 records out 01:01:04.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386158 s, 10.6 MB/s 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:01:04.837 10:58:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:01:04.837 10:58:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:01:04.837 10:58:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:01:04.837 10:58:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:01:04.837 10:58:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:04.837 10:58:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:01:05.095 { 01:01:05.095 "bdev_name": "Malloc0", 01:01:05.095 "nbd_device": "/dev/nbd0" 01:01:05.095 }, 01:01:05.095 { 01:01:05.095 "bdev_name": "Malloc1", 01:01:05.095 "nbd_device": "/dev/nbd1" 01:01:05.095 } 
01:01:05.095 ]' 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:01:05.095 { 01:01:05.095 "bdev_name": "Malloc0", 01:01:05.095 "nbd_device": "/dev/nbd0" 01:01:05.095 }, 01:01:05.095 { 01:01:05.095 "bdev_name": "Malloc1", 01:01:05.095 "nbd_device": "/dev/nbd1" 01:01:05.095 } 01:01:05.095 ]' 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:01:05.095 /dev/nbd1' 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:01:05.095 /dev/nbd1' 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:01:05.095 256+0 records in 01:01:05.095 256+0 records out 01:01:05.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00846644 s, 124 MB/s 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:01:05.095 256+0 records in 01:01:05.095 256+0 records out 01:01:05.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251433 s, 41.7 MB/s 01:01:05.095 10:58:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:01:05.096 10:58:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:01:05.353 256+0 records in 01:01:05.353 256+0 records out 01:01:05.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0376944 s, 27.8 MB/s 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:01:05.353 10:58:10 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:01:05.353 10:58:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:01:05.610 10:58:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:05.867 10:58:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:01:06.124 10:58:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:01:06.124 10:58:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:01:06.382 10:58:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:01:06.640 [2024-07-22 10:58:11.739591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:01:06.640 [2024-07-22 10:58:11.810768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:01:06.640 [2024-07-22 10:58:11.810781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:06.899 [2024-07-22 10:58:11.878878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:01:06.899 [2024-07-22 10:58:11.879004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:01:09.436 10:58:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:01:09.436 spdk_app_start Round 2 01:01:09.436 10:58:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 01:01:09.436 10:58:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74900 /var/tmp/spdk-nbd.sock 01:01:09.436 10:58:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 74900 ']' 01:01:09.436 10:58:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:01:09.436 10:58:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:09.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:01:09.436 10:58:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
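The nbd_get_count checks interleaved above are how the test confirms the export state: it asks the app for nbd_get_disks, extracts the nbd_device fields with jq, and counts the /dev/nbd matches with grep, expecting 2 while the devices are attached and 0 after they are stopped (when nothing is attached, grep prints 0 but exits non-zero, which the trace shows the script tolerating). The same check as a one-liner, assuming the $rpc shorthand from the earlier sketch:

$rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd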
01:01:09.436 10:58:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:09.436 10:58:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:01:09.694 10:58:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:09.694 10:58:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 01:01:09.694 10:58:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:01:09.952 Malloc0 01:01:09.952 10:58:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:01:10.210 Malloc1 01:01:10.210 10:58:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:01:10.210 10:58:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:01:10.776 /dev/nbd0 01:01:10.776 10:58:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:01:10.776 10:58:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:01:10.776 1+0 records in 01:01:10.776 1+0 records out 
01:01:10.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218492 s, 18.7 MB/s 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:01:10.776 10:58:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:01:10.776 10:58:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:01:10.776 10:58:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:01:10.776 10:58:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:01:11.033 /dev/nbd1 01:01:11.033 10:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:01:11.033 10:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:01:11.033 1+0 records in 01:01:11.033 1+0 records out 01:01:11.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386172 s, 10.6 MB/s 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 01:01:11.033 10:58:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 01:01:11.033 10:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:01:11.033 10:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:01:11.033 10:58:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:01:11.033 10:58:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:11.033 10:58:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:01:11.291 { 01:01:11.291 "bdev_name": "Malloc0", 01:01:11.291 "nbd_device": "/dev/nbd0" 01:01:11.291 }, 01:01:11.291 { 01:01:11.291 "bdev_name": "Malloc1", 01:01:11.291 "nbd_device": "/dev/nbd1" 01:01:11.291 } 
01:01:11.291 ]' 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:01:11.291 { 01:01:11.291 "bdev_name": "Malloc0", 01:01:11.291 "nbd_device": "/dev/nbd0" 01:01:11.291 }, 01:01:11.291 { 01:01:11.291 "bdev_name": "Malloc1", 01:01:11.291 "nbd_device": "/dev/nbd1" 01:01:11.291 } 01:01:11.291 ]' 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:01:11.291 /dev/nbd1' 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:01:11.291 /dev/nbd1' 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:01:11.291 256+0 records in 01:01:11.291 256+0 records out 01:01:11.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00893248 s, 117 MB/s 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:01:11.291 256+0 records in 01:01:11.291 256+0 records out 01:01:11.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310396 s, 33.8 MB/s 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:01:11.291 256+0 records in 01:01:11.291 256+0 records out 01:01:11.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304905 s, 34.4 MB/s 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:01:11.291 10:58:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:01:11.549 10:58:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:01:11.549 10:58:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:01:11.807 10:58:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:01:12.064 10:58:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:01:12.322 10:58:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:01:12.322 10:58:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:01:12.902 10:58:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:01:12.902 [2024-07-22 10:58:18.021168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:01:12.902 [2024-07-22 10:58:18.090625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:01:12.902 [2024-07-22 10:58:18.090644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:13.161 [2024-07-22 10:58:18.156944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:01:13.161 [2024-07-22 10:58:18.157024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:01:15.694 10:58:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 74900 /var/tmp/spdk-nbd.sock 01:01:15.694 10:58:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 74900 ']' 01:01:15.694 10:58:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:01:15.694 10:58:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:15.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:01:15.694 10:58:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
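Editor's note: the nbd_dd_data_verify steps traced above are a plain write-then-compare round trip: fill a scratch file with random data, copy it onto every NBD device with O_DIRECT, then cmp each device against the scratch file. A minimal bash sketch of that pattern, with illustrative variable names (the harness keeps its scratch file under test/event/):

  # Write 1 MiB of random data, push it to each NBD device, then verify
  # the first 1 MiB of every device matches the scratch file.
  tmp_file=/tmp/nbdrandtest          # illustrative path
  nbd_list=(/dev/nbd0 /dev/nbd1)

  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"  # non-zero exit status means a data mismatch
  done

  rm "$tmp_file"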
01:01:15.694 10:58:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:15.694 10:58:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 01:01:15.952 10:58:21 event.app_repeat -- event/event.sh@39 -- # killprocess 74900 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 74900 ']' 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 74900 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@953 -- # uname 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74900 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:15.952 killing process with pid 74900 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74900' 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@967 -- # kill 74900 01:01:15.952 10:58:21 event.app_repeat -- common/autotest_common.sh@972 -- # wait 74900 01:01:16.210 spdk_app_start is called in Round 0. 01:01:16.210 Shutdown signal received, stop current app iteration 01:01:16.210 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 reinitialization... 01:01:16.210 spdk_app_start is called in Round 1. 01:01:16.210 Shutdown signal received, stop current app iteration 01:01:16.210 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 reinitialization... 01:01:16.210 spdk_app_start is called in Round 2. 01:01:16.210 Shutdown signal received, stop current app iteration 01:01:16.210 Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 reinitialization... 01:01:16.210 spdk_app_start is called in Round 3. 
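Editor's note: the killprocess helper traced here inspects the pid's command name with ps before signalling it, so a stale or recycled pid is never killed blindly. A rough, simplified approximation of that guard (the real body lives in autotest_common.sh and also special-cases processes wrapped by sudo rather than refusing them):

  killprocess() {
    local pid=$1 process_name
    # Bail out quietly if the pid is already gone.
    process_name=$(ps --no-headers -o comm= "$pid") || return 0
    # The harness expects an SPDK app here (comm reactor_0); this sketch simply refuses sudo.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap it; works because the target was started as our child
  }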
01:01:16.210 Shutdown signal received, stop current app iteration 01:01:16.210 10:58:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 01:01:16.210 10:58:21 event.app_repeat -- event/event.sh@42 -- # return 0 01:01:16.210 01:01:16.210 real 0m19.233s 01:01:16.210 user 0m43.280s 01:01:16.210 sys 0m3.285s 01:01:16.210 10:58:21 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:16.210 ************************************ 01:01:16.210 END TEST app_repeat 01:01:16.210 10:58:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:01:16.210 ************************************ 01:01:16.210 10:58:21 event -- common/autotest_common.sh@1142 -- # return 0 01:01:16.210 10:58:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 01:01:16.210 10:58:21 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:01:16.210 10:58:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:16.210 10:58:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:16.210 10:58:21 event -- common/autotest_common.sh@10 -- # set +x 01:01:16.468 ************************************ 01:01:16.468 START TEST cpu_locks 01:01:16.468 ************************************ 01:01:16.469 10:58:21 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:01:16.469 * Looking for test storage... 01:01:16.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:01:16.469 10:58:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 01:01:16.469 10:58:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 01:01:16.469 10:58:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 01:01:16.469 10:58:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 01:01:16.469 10:58:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:16.469 10:58:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:16.469 10:58:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:01:16.469 ************************************ 01:01:16.469 START TEST default_locks 01:01:16.469 ************************************ 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75524 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75524 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 75524 ']' 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:16.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
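Editor's note: each cpu_locks sub-test that follows starts its own spdk_tgt with an explicit core mask and then blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that startup step, assuming the default socket path and polling for the socket file instead of issuing a real RPC probe as the harness does:

  # Start a target pinned to core 0 (-m 0x1) and wait for its RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!

  for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break   # the harness instead polls rpc.py until it responds
    sleep 0.1
  done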
01:01:16.469 10:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:16.469 10:58:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:01:16.469 [2024-07-22 10:58:21.584403] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:16.469 [2024-07-22 10:58:21.585193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75524 ] 01:01:16.727 [2024-07-22 10:58:21.732702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:16.727 [2024-07-22 10:58:21.833177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:17.662 10:58:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:17.662 10:58:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 01:01:17.662 10:58:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75524 01:01:17.662 10:58:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75524 01:01:17.662 10:58:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75524 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 75524 ']' 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 75524 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75524 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:17.920 killing process with pid 75524 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75524' 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 75524 01:01:17.920 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 75524 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75524 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75524 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75524 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 75524 ']' 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:18.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:01:18.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75524) - No such process 01:01:18.485 ERROR: process (pid: 75524) is no longer running 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:01:18.485 01:01:18.485 real 0m1.963s 01:01:18.485 user 0m2.136s 01:01:18.485 sys 0m0.612s 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:18.485 10:58:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:01:18.485 ************************************ 01:01:18.485 END TEST default_locks 01:01:18.485 ************************************ 01:01:18.485 10:58:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:01:18.485 10:58:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 01:01:18.485 10:58:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:18.485 10:58:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:18.485 10:58:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:01:18.485 ************************************ 01:01:18.485 START TEST default_locks_via_rpc 01:01:18.485 ************************************ 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75588 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75588 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75588 ']' 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:18.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:18.485 10:58:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:18.485 [2024-07-22 10:58:23.603287] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:18.485 [2024-07-22 10:58:23.603436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75588 ] 01:01:18.743 [2024-07-22 10:58:23.737016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:18.743 [2024-07-22 10:58:23.814547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75588 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75588 01:01:19.309 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75588 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 75588 ']' 01:01:19.913 
10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 75588 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75588 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:19.913 killing process with pid 75588 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75588' 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 75588 01:01:19.913 10:58:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 75588 01:01:20.171 01:01:20.171 real 0m1.792s 01:01:20.171 user 0m1.882s 01:01:20.171 sys 0m0.555s 01:01:20.171 10:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:20.171 10:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:20.171 ************************************ 01:01:20.171 END TEST default_locks_via_rpc 01:01:20.171 ************************************ 01:01:20.171 10:58:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:01:20.171 10:58:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 01:01:20.171 10:58:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:20.171 10:58:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:20.171 10:58:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:01:20.171 ************************************ 01:01:20.171 START TEST non_locking_app_on_locked_coremask 01:01:20.171 ************************************ 01:01:20.171 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75657 01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75657 /var/tmp/spdk.sock 01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75657 ']' 01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:20.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
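Editor's note: the default_locks_via_rpc sequence above boils down to: start a locked target, release its core locks over RPC, confirm no lock files remain, re-enable the locks, and confirm the flock shows up again in lslocks. Condensed into the commands the trace shows (rpc.py defaults to /var/tmp/spdk.sock; $spdk_tgt_pid is the pid captured at startup):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  locks_exist() {
    # The target flocks one file per claimed core; lslocks shows them as spdk_cpu_lock_*.
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  "$rpc" framework_disable_cpumask_locks     # drop the core locks at runtime
  "$rpc" framework_enable_cpumask_locks      # take them back
  locks_exist "$spdk_tgt_pid" && echo "core lock is held again"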
01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:20.429 10:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:01:20.429 [2024-07-22 10:58:25.445420] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:20.429 [2024-07-22 10:58:25.445530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75657 ] 01:01:20.429 [2024-07-22 10:58:25.580965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:20.687 [2024-07-22 10:58:25.666820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75685 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75685 /var/tmp/spdk2.sock 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75685 ']' 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:01:21.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:21.254 10:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 01:01:21.512 [2024-07-22 10:58:26.494993] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:21.512 [2024-07-22 10:58:26.495090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75685 ] 01:01:21.512 [2024-07-22 10:58:26.638902] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
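Editor's note: what this start of non_locking_app_on_locked_coremask demonstrates is that the first target claims core 0, and a second target can still come up on the same core only because it is launched with --disable-cpumask-locks and its own RPC socket. The two launch lines, in essence (paths as in the trace):

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$bin" -m 0x1 &                                                 # claims the core-0 lock
  "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, takes no lock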
01:01:21.512 [2024-07-22 10:58:26.638949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:21.770 [2024-07-22 10:58:26.824823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:22.336 10:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:22.336 10:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 01:01:22.336 10:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75657 01:01:22.336 10:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75657 01:01:22.336 10:58:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75657 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75657 ']' 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75657 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75657 01:01:23.284 killing process with pid 75657 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75657' 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75657 01:01:23.284 10:58:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75657 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75685 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75685 ']' 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75685 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75685 01:01:24.220 killing process with pid 75685 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75685' 01:01:24.220 10:58:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75685 01:01:24.220 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75685 01:01:24.787 01:01:24.787 real 0m4.346s 01:01:24.787 user 0m4.782s 01:01:24.787 sys 0m1.269s 01:01:24.787 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:24.787 ************************************ 01:01:24.787 END TEST non_locking_app_on_locked_coremask 01:01:24.787 ************************************ 01:01:24.787 10:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:24.787 10:58:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:01:24.787 10:58:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 01:01:24.787 10:58:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:24.787 10:58:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:24.787 10:58:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:01:24.787 ************************************ 01:01:24.787 START TEST locking_app_on_unlocked_coremask 01:01:24.787 ************************************ 01:01:24.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75764 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75764 /var/tmp/spdk.sock 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75764 ']' 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:24.787 10:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:24.787 [2024-07-22 10:58:29.853849] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:24.787 [2024-07-22 10:58:29.853989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75764 ] 01:01:25.046 [2024-07-22 10:58:29.997096] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:01:25.046 [2024-07-22 10:58:29.997155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:25.046 [2024-07-22 10:58:30.090965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75792 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75792 /var/tmp/spdk2.sock 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75792 ']' 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:01:25.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:25.980 10:58:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:25.980 [2024-07-22 10:58:30.991766] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
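Editor's note: locking_app_on_unlocked_coremask is the mirror image. Here the first target is the one started with --disable-cpumask-locks, so core 0 stays unclaimed and the second, normally started target takes the lock (locks_exist is checked against the second pid further down in the trace). A sketch of the expected end state, checking lock visibility with lslocks as the other sub-tests do:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$bin" -m 0x1 --disable-cpumask-locks &   # no core-0 lock taken
  pid1=$!
  "$bin" -m 0x1 -r /var/tmp/spdk2.sock &    # free to claim core 0
  pid2=$!

  lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "second instance holds the core-0 lock"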
01:01:25.980 [2024-07-22 10:58:30.992445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75792 ] 01:01:25.980 [2024-07-22 10:58:31.141885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:26.239 [2024-07-22 10:58:31.330412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:26.805 10:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:26.805 10:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 01:01:26.805 10:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75792 01:01:26.805 10:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75792 01:01:26.805 10:58:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75764 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75764 ']' 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 75764 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75764 01:01:27.736 killing process with pid 75764 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75764' 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 75764 01:01:27.736 10:58:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 75764 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75792 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75792 ']' 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 75792 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75792 01:01:28.668 killing process with pid 75792 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:28.668 10:58:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75792' 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 75792 01:01:28.668 10:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 75792 01:01:29.234 01:01:29.234 real 0m4.494s 01:01:29.234 user 0m4.934s 01:01:29.234 sys 0m1.306s 01:01:29.234 ************************************ 01:01:29.234 END TEST locking_app_on_unlocked_coremask 01:01:29.234 ************************************ 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:29.234 10:58:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:01:29.234 10:58:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 01:01:29.234 10:58:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:29.234 10:58:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:29.234 10:58:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:01:29.234 ************************************ 01:01:29.234 START TEST locking_app_on_locked_coremask 01:01:29.234 ************************************ 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75882 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75882 /var/tmp/spdk.sock 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75882 ']' 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:29.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:29.234 10:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:29.234 [2024-07-22 10:58:34.395112] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:01:29.234 [2024-07-22 10:58:34.395396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75882 ] 01:01:29.492 [2024-07-22 10:58:34.538270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:29.492 [2024-07-22 10:58:34.641368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75910 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75910 /var/tmp/spdk2.sock 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75910 /var/tmp/spdk2.sock 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75910 /var/tmp/spdk2.sock 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75910 ']' 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:01:30.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:30.425 10:58:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:30.425 [2024-07-22 10:58:35.496877] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:01:30.425 [2024-07-22 10:58:35.497245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75910 ] 01:01:30.684 [2024-07-22 10:58:35.647083] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75882 has claimed it. 01:01:30.684 [2024-07-22 10:58:35.650207] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:01:31.249 ERROR: process (pid: 75910) is no longer running 01:01:31.249 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75910) - No such process 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75882 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75882 01:01:31.249 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75882 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75882 ']' 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75882 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75882 01:01:31.507 killing process with pid 75882 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75882' 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75882 01:01:31.507 10:58:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75882 01:01:32.084 01:01:32.084 real 0m2.793s 01:01:32.084 user 0m3.200s 01:01:32.084 sys 0m0.732s 01:01:32.084 10:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:32.084 ************************************ 01:01:32.084 END 
TEST locking_app_on_locked_coremask 01:01:32.084 ************************************ 01:01:32.084 10:58:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:32.084 10:58:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:01:32.084 10:58:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 01:01:32.084 10:58:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:32.084 10:58:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:32.084 10:58:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:01:32.084 ************************************ 01:01:32.084 START TEST locking_overlapped_coremask 01:01:32.084 ************************************ 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75962 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75962 /var/tmp/spdk.sock 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 75962 ']' 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:32.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:32.084 10:58:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:32.084 [2024-07-22 10:58:37.277159] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
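Editor's note: the failed second instance above is exactly what the harness's NOT wrapper is for: the wrapped command (waitforlisten on /var/tmp/spdk2.sock) must fail, and the es=1 bookkeeping in the trace records that it did. A stripped-down equivalent of the idea (the real wrapper in autotest_common.sh also validates the argument type and inspects exit codes more carefully):

  # Succeed only if the wrapped command fails - used for negative tests such as
  # "a second target must NOT come up on an already-claimed core".
  NOT() {
    if "$@"; then
      return 1    # command unexpectedly succeeded
    fi
    return 0
  }

  # e.g.  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock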
01:01:32.084 [2024-07-22 10:58:37.278744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75962 ] 01:01:32.342 [2024-07-22 10:58:37.428094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:01:32.342 [2024-07-22 10:58:37.524891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:01:32.342 [2024-07-22 10:58:37.525032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:01:32.342 [2024-07-22 10:58:37.525040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75992 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75992 /var/tmp/spdk2.sock 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75992 /var/tmp/spdk2.sock 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75992 /var/tmp/spdk2.sock 01:01:33.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 75992 ']' 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:33.277 10:58:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:33.277 [2024-07-22 10:58:38.352306] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
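Editor's note: the two masks used in locking_overlapped_coremask overlap on exactly one core, which is why the second target is expected to fail: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is contested (the claim error for core 2 follows in the trace). A quick way to see the overlap:

  # Bitwise AND of the two cpumasks gives the contested cores.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2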
01:01:33.277 [2024-07-22 10:58:38.352402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75992 ] 01:01:33.536 [2024-07-22 10:58:38.501020] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75962 has claimed it. 01:01:33.536 [2024-07-22 10:58:38.501111] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:01:34.102 ERROR: process (pid: 75992) is no longer running 01:01:34.102 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75992) - No such process 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75962 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 75962 ']' 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 75962 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75962 01:01:34.102 killing process with pid 75962 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75962' 01:01:34.102 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 75962 01:01:34.102 10:58:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 75962 01:01:34.360 ************************************ 01:01:34.360 END TEST locking_overlapped_coremask 01:01:34.360 ************************************ 01:01:34.360 01:01:34.360 real 0m2.261s 01:01:34.360 user 0m6.227s 01:01:34.360 sys 0m0.506s 01:01:34.360 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:34.360 10:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:01:34.360 10:58:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:01:34.360 10:58:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 01:01:34.360 10:58:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:34.360 10:58:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:34.360 10:58:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:01:34.360 ************************************ 01:01:34.360 START TEST locking_overlapped_coremask_via_rpc 01:01:34.360 ************************************ 01:01:34.360 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 01:01:34.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:34.360 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76038 01:01:34.360 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76038 /var/tmp/spdk.sock 01:01:34.360 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76038 ']' 01:01:34.361 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 01:01:34.361 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:34.361 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:34.361 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:34.361 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:34.361 10:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:34.361 [2024-07-22 10:58:39.541149] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:34.361 [2024-07-22 10:58:39.541230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76038 ] 01:01:34.619 [2024-07-22 10:58:39.672120] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
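Editor's note: between the two overlapped tests, check_remaining_locks (traced just above) asserts that the surviving 0x7 target still holds exactly one lock file per claimed core. The comparison amounts to:

  # Expect one lock file per core in the surviving target's 0x7 mask (cores 0-2).
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"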
01:01:34.619 [2024-07-22 10:58:39.672165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:01:34.619 [2024-07-22 10:58:39.764518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:01:34.619 [2024-07-22 10:58:39.764627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:01:34.619 [2024-07-22 10:58:39.764632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76069 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76069 /var/tmp/spdk2.sock 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76069 ']' 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:01:35.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:35.549 10:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:35.549 [2024-07-22 10:58:40.548888] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:35.549 [2024-07-22 10:58:40.549018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76069 ] 01:01:35.549 [2024-07-22 10:58:40.696935] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
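The two targets are given deliberately overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the only contested core. Plain shell arithmetic shows the overlap:
  printf 'shared mask: 0x%x\n' $((0x7 & 0x1c))    # prints 0x4, i.e. bit 2 / core 2 is the one both masks claim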
01:01:35.549 [2024-07-22 10:58:40.697017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:01:35.808 [2024-07-22 10:58:40.901678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:01:35.808 [2024-07-22 10:58:40.901813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:01:35.808 [2024-07-22 10:58:40.901814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:36.373 [2024-07-22 10:58:41.533120] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76038 has claimed it. 01:01:36.373 2024/07/22 10:58:41 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 01:01:36.373 request: 01:01:36.373 { 01:01:36.373 "method": "framework_enable_cpumask_locks", 01:01:36.373 "params": {} 01:01:36.373 } 01:01:36.373 Got JSON-RPC error response 01:01:36.373 GoRPCClient: error on JSON-RPC call 01:01:36.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
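Issued by hand against the second target's socket, the call that just failed would look roughly like this (scripts/rpc.py as the client is an assumption; the error is the one logged above):
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected to fail with Code=-32603 Msg=Failed to claim CPU core: 2, since pid 76038 already holds the core 2 lock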
01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76038 /var/tmp/spdk.sock 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76038 ']' 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:36.373 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76069 /var/tmp/spdk2.sock 01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 76069 ']' 01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:01:36.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
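The check_remaining_locks helper that runs next simply globs /var/tmp/spdk_cpu_lock_* and compares the result against the expected trio for mask 0x7; listed by hand at this point it should show (listing is illustrative):
  ls /var/tmp/spdk_cpu_lock_*
  # /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002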
01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:36.938 10:58:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:37.196 10:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:37.196 10:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 01:01:37.196 10:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 01:01:37.196 10:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:01:37.196 ************************************ 01:01:37.196 END TEST locking_overlapped_coremask_via_rpc 01:01:37.196 ************************************ 01:01:37.196 10:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:01:37.196 10:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:01:37.196 01:01:37.196 real 0m2.681s 01:01:37.196 user 0m1.392s 01:01:37.196 sys 0m0.231s 01:01:37.196 10:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:37.196 10:58:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 01:01:37.196 10:58:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 01:01:37.196 10:58:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76038 ]] 01:01:37.196 10:58:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76038 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76038 ']' 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76038 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76038 01:01:37.196 killing process with pid 76038 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76038' 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 76038 01:01:37.196 10:58:42 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 76038 01:01:37.454 10:58:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76069 ]] 01:01:37.454 10:58:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76069 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76069 ']' 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76069 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 01:01:37.454 10:58:42 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76069 01:01:37.454 killing process with pid 76069 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76069' 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 76069 01:01:37.454 10:58:42 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 76069 01:01:38.020 10:58:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:01:38.020 Process with pid 76038 is not found 01:01:38.020 Process with pid 76069 is not found 01:01:38.020 10:58:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 01:01:38.020 10:58:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76038 ]] 01:01:38.020 10:58:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76038 01:01:38.020 10:58:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76038 ']' 01:01:38.020 10:58:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76038 01:01:38.020 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (76038) - No such process 01:01:38.020 10:58:43 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 76038 is not found' 01:01:38.020 10:58:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76069 ]] 01:01:38.020 10:58:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76069 01:01:38.020 10:58:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 76069 ']' 01:01:38.020 10:58:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 76069 01:01:38.020 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (76069) - No such process 01:01:38.020 10:58:43 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 76069 is not found' 01:01:38.020 10:58:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:01:38.020 ************************************ 01:01:38.020 END TEST cpu_locks 01:01:38.020 ************************************ 01:01:38.020 01:01:38.020 real 0m21.607s 01:01:38.020 user 0m37.249s 01:01:38.020 sys 0m6.097s 01:01:38.020 10:58:43 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:38.020 10:58:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:01:38.020 10:58:43 event -- common/autotest_common.sh@1142 -- # return 0 01:01:38.020 ************************************ 01:01:38.020 END TEST event 01:01:38.020 ************************************ 01:01:38.020 01:01:38.020 real 0m49.894s 01:01:38.020 user 1m36.017s 01:01:38.020 sys 0m10.214s 01:01:38.020 10:58:43 event -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:38.020 10:58:43 event -- common/autotest_common.sh@10 -- # set +x 01:01:38.020 10:58:43 -- common/autotest_common.sh@1142 -- # return 0 01:01:38.020 10:58:43 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:01:38.020 10:58:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:38.020 10:58:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:38.020 10:58:43 -- common/autotest_common.sh@10 -- # set +x 01:01:38.020 ************************************ 01:01:38.020 START TEST thread 
01:01:38.020 ************************************ 01:01:38.020 10:58:43 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:01:38.020 * Looking for test storage... 01:01:38.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 01:01:38.020 10:58:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:01:38.020 10:58:43 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 01:01:38.020 10:58:43 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:38.020 10:58:43 thread -- common/autotest_common.sh@10 -- # set +x 01:01:38.020 ************************************ 01:01:38.020 START TEST thread_poller_perf 01:01:38.020 ************************************ 01:01:38.020 10:58:43 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:01:38.020 [2024-07-22 10:58:43.219828] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:38.020 [2024-07-22 10:58:43.219913] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76215 ] 01:01:38.278 [2024-07-22 10:58:43.353697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:38.278 [2024-07-22 10:58:43.448230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:38.278 Running 1000 pollers for 1 seconds with 1 microseconds period. 01:01:39.655 ====================================== 01:01:39.655 busy:2208334046 (cyc) 01:01:39.655 total_run_count: 364000 01:01:39.655 tsc_hz: 2200000000 (cyc) 01:01:39.655 ====================================== 01:01:39.655 poller_cost: 6066 (cyc), 2757 (nsec) 01:01:39.655 01:01:39.655 real 0m1.317s 01:01:39.655 user 0m1.159s 01:01:39.655 sys 0m0.050s 01:01:39.655 10:58:44 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:39.655 ************************************ 01:01:39.655 END TEST thread_poller_perf 01:01:39.655 10:58:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:01:39.655 ************************************ 01:01:39.655 10:58:44 thread -- common/autotest_common.sh@1142 -- # return 0 01:01:39.655 10:58:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:01:39.655 10:58:44 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 01:01:39.655 10:58:44 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:39.655 10:58:44 thread -- common/autotest_common.sh@10 -- # set +x 01:01:39.655 ************************************ 01:01:39.655 START TEST thread_poller_perf 01:01:39.655 ************************************ 01:01:39.655 10:58:44 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:01:39.655 [2024-07-22 10:58:44.600615] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
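The poller_cost figure above is just busy cycles divided by run count, then converted to nanoseconds with the reported TSC rate: 2208334046 cyc / 364000 runs ≈ 6066 cyc per poll, and 6066 cyc / 2.2 cyc per ns (tsc_hz 2200000000) ≈ 2757 ns. The same arithmetic in shell:
  echo $((2208334046 / 364000))                  # 6066 cycles per poll
  awk 'BEGIN { printf "%.0f\n", 6066 / 2.2 }'    # 2757 ns at a 2.2 GHz TSC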
01:01:39.655 [2024-07-22 10:58:44.600725] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76251 ] 01:01:39.655 [2024-07-22 10:58:44.741708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:39.655 Running 1000 pollers for 1 seconds with 0 microseconds period. 01:01:39.655 [2024-07-22 10:58:44.828063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:41.026 ====================================== 01:01:41.026 busy:2202377339 (cyc) 01:01:41.026 total_run_count: 3632000 01:01:41.026 tsc_hz: 2200000000 (cyc) 01:01:41.026 ====================================== 01:01:41.026 poller_cost: 606 (cyc), 275 (nsec) 01:01:41.026 01:01:41.026 real 0m1.324s 01:01:41.026 user 0m1.152s 01:01:41.026 sys 0m0.063s 01:01:41.026 10:58:45 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:41.026 ************************************ 01:01:41.026 END TEST thread_poller_perf 01:01:41.026 ************************************ 01:01:41.026 10:58:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:01:41.026 10:58:45 thread -- common/autotest_common.sh@1142 -- # return 0 01:01:41.026 10:58:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 01:01:41.026 ************************************ 01:01:41.026 END TEST thread 01:01:41.026 ************************************ 01:01:41.026 01:01:41.026 real 0m2.832s 01:01:41.026 user 0m2.385s 01:01:41.026 sys 0m0.224s 01:01:41.026 10:58:45 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:41.026 10:58:45 thread -- common/autotest_common.sh@10 -- # set +x 01:01:41.026 10:58:45 -- common/autotest_common.sh@1142 -- # return 0 01:01:41.026 10:58:45 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 01:01:41.026 10:58:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:01:41.026 10:58:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:41.026 10:58:45 -- common/autotest_common.sh@10 -- # set +x 01:01:41.026 ************************************ 01:01:41.026 START TEST accel 01:01:41.026 ************************************ 01:01:41.026 10:58:46 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 01:01:41.026 * Looking for test storage... 01:01:41.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 01:01:41.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
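Comparing the two poller_perf runs above: the 1-microsecond period case (-l 1) reports 6066 cycles per poll while the zero-period case (-l 0) reports 606 cycles, roughly a tenfold difference:
  awk 'BEGIN { printf "%.1f\n", 6066 / 606 }'    # ≈ 10.0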
01:01:41.026 10:58:46 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 01:01:41.026 10:58:46 accel -- accel/accel.sh@82 -- # get_expected_opcs 01:01:41.026 10:58:46 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:01:41.026 10:58:46 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76325 01:01:41.026 10:58:46 accel -- accel/accel.sh@63 -- # waitforlisten 76325 01:01:41.026 10:58:46 accel -- common/autotest_common.sh@829 -- # '[' -z 76325 ']' 01:01:41.026 10:58:46 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:41.026 10:58:46 accel -- common/autotest_common.sh@834 -- # local max_retries=100 01:01:41.026 10:58:46 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 01:01:41.026 10:58:46 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:41.026 10:58:46 accel -- common/autotest_common.sh@838 -- # xtrace_disable 01:01:41.026 10:58:46 accel -- accel/accel.sh@61 -- # build_accel_config 01:01:41.026 10:58:46 accel -- common/autotest_common.sh@10 -- # set +x 01:01:41.026 10:58:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:41.026 10:58:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:41.026 10:58:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:41.026 10:58:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:41.026 10:58:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:41.026 10:58:46 accel -- accel/accel.sh@40 -- # local IFS=, 01:01:41.026 10:58:46 accel -- accel/accel.sh@41 -- # jq -r . 01:01:41.026 [2024-07-22 10:58:46.148365] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:41.026 [2024-07-22 10:58:46.149033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76325 ] 01:01:41.284 [2024-07-22 10:58:46.293360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:41.284 [2024-07-22 10:58:46.394845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@862 -- # return 0 01:01:42.220 10:58:47 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 01:01:42.220 10:58:47 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 01:01:42.220 10:58:47 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 01:01:42.220 10:58:47 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 01:01:42.220 10:58:47 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 01:01:42.220 10:58:47 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@559 -- # xtrace_disable 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@10 -- # set +x 01:01:42.220 10:58:47 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # IFS== 01:01:42.220 10:58:47 accel -- accel/accel.sh@72 -- # read -r opc module 01:01:42.220 10:58:47 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 01:01:42.220 10:58:47 accel -- accel/accel.sh@75 -- # killprocess 76325 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@948 -- # '[' -z 76325 ']' 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@952 -- # kill -0 76325 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@953 -- # uname 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76325 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76325' 01:01:42.220 killing process with pid 76325 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@967 -- # kill 76325 01:01:42.220 10:58:47 accel -- common/autotest_common.sh@972 -- # wait 76325 01:01:42.807 10:58:47 accel -- accel/accel.sh@76 -- # trap - ERR 01:01:42.807 10:58:47 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 01:01:42.807 10:58:47 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:01:42.807 10:58:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:42.807 10:58:47 accel -- common/autotest_common.sh@10 -- # set +x 01:01:42.807 10:58:47 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 01:01:42.807 10:58:47 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
01:01:42.807 10:58:47 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:42.807 10:58:47 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 01:01:42.807 10:58:47 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:42.807 10:58:47 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 01:01:42.807 10:58:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:01:42.807 10:58:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:42.807 10:58:47 accel -- common/autotest_common.sh@10 -- # set +x 01:01:42.807 ************************************ 01:01:42.807 START TEST accel_missing_filename 01:01:42.807 ************************************ 01:01:42.807 10:58:47 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 01:01:42.807 10:58:47 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 01:01:42.807 10:58:47 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 01:01:42.807 10:58:47 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 01:01:42.807 10:58:47 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:42.807 10:58:47 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 01:01:42.807 10:58:47 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:42.807 10:58:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 01:01:42.807 10:58:47 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 01:01:42.807 [2024-07-22 10:58:47.811490] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:42.807 [2024-07-22 10:58:47.811566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76395 ] 01:01:42.807 [2024-07-22 10:58:47.947652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:43.064 [2024-07-22 10:58:48.034641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:43.064 [2024-07-22 10:58:48.102061] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:43.064 [2024-07-22 10:58:48.193349] accel_perf.c:1463:main: *ERROR*: ERROR starting application 01:01:43.064 A filename is required. 
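The failure above is the intended negative case: a compress workload needs an input file via -l. The corrected invocation would look roughly like the one the next test uses, e.g.:
  ./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib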
01:01:43.322 ************************************ 01:01:43.322 END TEST accel_missing_filename 01:01:43.322 ************************************ 01:01:43.322 10:58:48 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 01:01:43.322 10:58:48 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:01:43.322 10:58:48 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 01:01:43.322 10:58:48 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 01:01:43.322 10:58:48 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 01:01:43.322 10:58:48 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:01:43.322 01:01:43.322 real 0m0.483s 01:01:43.322 user 0m0.291s 01:01:43.322 sys 0m0.137s 01:01:43.322 10:58:48 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:43.322 10:58:48 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 01:01:43.322 10:58:48 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:43.322 10:58:48 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:01:43.322 10:58:48 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 01:01:43.322 10:58:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:43.322 10:58:48 accel -- common/autotest_common.sh@10 -- # set +x 01:01:43.322 ************************************ 01:01:43.322 START TEST accel_compress_verify 01:01:43.322 ************************************ 01:01:43.322 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:01:43.322 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 01:01:43.322 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:01:43.322 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 01:01:43.322 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:43.322 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 01:01:43.322 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:43.322 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:01:43.322 10:58:48 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:01:43.322 10:58:48 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 01:01:43.322 10:58:48 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:43.322 10:58:48 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:43.322 10:58:48 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:43.322 10:58:48 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:43.322 10:58:48 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:43.322 10:58:48 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 01:01:43.322 10:58:48 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 01:01:43.322 [2024-07-22 10:58:48.348657] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:43.322 [2024-07-22 10:58:48.348768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76421 ] 01:01:43.322 [2024-07-22 10:58:48.488923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:43.581 [2024-07-22 10:58:48.571600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:43.581 [2024-07-22 10:58:48.638729] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:43.581 [2024-07-22 10:58:48.729437] accel_perf.c:1463:main: *ERROR*: ERROR starting application 01:01:43.839 01:01:43.839 Compression does not support the verify option, aborting. 01:01:43.839 ************************************ 01:01:43.839 END TEST accel_compress_verify 01:01:43.839 ************************************ 01:01:43.839 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 01:01:43.839 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:01:43.839 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 01:01:43.839 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 01:01:43.839 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 01:01:43.839 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:01:43.839 01:01:43.839 real 0m0.491s 01:01:43.839 user 0m0.303s 01:01:43.839 sys 0m0.133s 01:01:43.839 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:43.839 10:58:48 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 01:01:43.839 10:58:48 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:43.839 10:58:48 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 01:01:43.839 10:58:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:01:43.839 10:58:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:43.839 10:58:48 accel -- common/autotest_common.sh@10 -- # set +x 01:01:43.839 ************************************ 01:01:43.839 START TEST accel_wrong_workload 01:01:43.839 ************************************ 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
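A rough mental model of the NOT wrapper driving these negative cases (simplified; the real helper in autotest_common.sh also validates the command and maps the exit status, as the es= lines show):
  NOT() { if "$@"; then return 1; else return 0; fi; }    # succeeds only when the wrapped command fails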
01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 01:01:43.839 10:58:48 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 01:01:43.839 Unsupported workload type: foobar 01:01:43.839 [2024-07-22 10:58:48.886704] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 01:01:43.839 accel_perf options: 01:01:43.839 [-h help message] 01:01:43.839 [-q queue depth per core] 01:01:43.839 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 01:01:43.839 [-T number of threads per core 01:01:43.839 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 01:01:43.839 [-t time in seconds] 01:01:43.839 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 01:01:43.839 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 01:01:43.839 [-M assign module to the operation, not compatible with accel_assign_opc RPC 01:01:43.839 [-l for compress/decompress workloads, name of uncompressed input file 01:01:43.839 [-S for crc32c workload, use this seed value (default 0) 01:01:43.839 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 01:01:43.839 [-f for fill workload, use this BYTE value (default 255) 01:01:43.839 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 01:01:43.839 [-y verify result if this switch is on] 01:01:43.839 [-a tasks to allocate per core (default: same value as -q)] 01:01:43.839 Can be used to spread operations across a wider range of memory. 
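The option summary above lists the accepted workload types; any of them would pass where foobar was rejected, for instance something like:
  ./build/examples/accel_perf -t 1 -w copy -o 4096 -q 64    # illustrative only; the suite normally feeds a config via -c /dev/fd/62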
01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:01:43.839 01:01:43.839 real 0m0.030s 01:01:43.839 user 0m0.018s 01:01:43.839 sys 0m0.011s 01:01:43.839 ************************************ 01:01:43.839 END TEST accel_wrong_workload 01:01:43.839 ************************************ 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:43.839 10:58:48 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 01:01:43.839 10:58:48 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:43.839 10:58:48 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 01:01:43.839 10:58:48 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 01:01:43.839 10:58:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:43.839 10:58:48 accel -- common/autotest_common.sh@10 -- # set +x 01:01:43.839 ************************************ 01:01:43.839 START TEST accel_negative_buffers 01:01:43.839 ************************************ 01:01:43.839 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 01:01:43.839 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 01:01:43.839 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 01:01:43.839 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 01:01:43.840 10:58:48 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 01:01:43.840 -x option must be non-negative. 
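Per the option summary, -x sets the number of xor source buffers and its minimum is 2, so the -x -1 run here is rejected during argument parsing; the smallest accepted variant of the same command would be roughly:
  ./build/examples/accel_perf -t 1 -w xor -y -x 2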
01:01:43.840 [2024-07-22 10:58:48.960483] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 01:01:43.840 accel_perf options: 01:01:43.840 [-h help message] 01:01:43.840 [-q queue depth per core] 01:01:43.840 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 01:01:43.840 [-T number of threads per core 01:01:43.840 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 01:01:43.840 [-t time in seconds] 01:01:43.840 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 01:01:43.840 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 01:01:43.840 [-M assign module to the operation, not compatible with accel_assign_opc RPC 01:01:43.840 [-l for compress/decompress workloads, name of uncompressed input file 01:01:43.840 [-S for crc32c workload, use this seed value (default 0) 01:01:43.840 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 01:01:43.840 [-f for fill workload, use this BYTE value (default 255) 01:01:43.840 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 01:01:43.840 [-y verify result if this switch is on] 01:01:43.840 [-a tasks to allocate per core (default: same value as -q)] 01:01:43.840 Can be used to spread operations across a wider range of memory. 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:01:43.840 ************************************ 01:01:43.840 END TEST accel_negative_buffers 01:01:43.840 ************************************ 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:01:43.840 01:01:43.840 real 0m0.028s 01:01:43.840 user 0m0.015s 01:01:43.840 sys 0m0.012s 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:43.840 10:58:48 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 01:01:43.840 10:58:49 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:43.840 10:58:49 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 01:01:43.840 10:58:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:01:43.840 10:58:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:43.840 10:58:49 accel -- common/autotest_common.sh@10 -- # set +x 01:01:43.840 ************************************ 01:01:43.840 START TEST accel_crc32c 01:01:43.840 ************************************ 01:01:43.840 10:58:49 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 01:01:43.840 10:58:49 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 01:01:44.098 [2024-07-22 10:58:49.044259] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:44.098 [2024-07-22 10:58:49.044337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76483 ] 01:01:44.098 [2024-07-22 10:58:49.186685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:44.098 [2024-07-22 10:58:49.269594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.356 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:44.357 10:58:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 01:01:45.731 10:58:50 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:01:45.731 01:01:45.731 real 0m1.493s 01:01:45.731 user 0m1.262s 01:01:45.731 sys 0m0.134s 01:01:45.731 10:58:50 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:45.731 10:58:50 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 01:01:45.731 ************************************ 01:01:45.731 END TEST accel_crc32c 01:01:45.731 ************************************ 01:01:45.731 10:58:50 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:45.731 10:58:50 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 01:01:45.731 10:58:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:01:45.731 10:58:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:45.731 10:58:50 accel -- common/autotest_common.sh@10 -- # set +x 01:01:45.731 ************************************ 01:01:45.731 START TEST accel_crc32c_C2 01:01:45.731 ************************************ 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 01:01:45.731 10:58:50 
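The long runs of 'IFS=:', 'read -r var val' and 'case "$var" in' entries throughout this trace are bash xtrace output from accel.sh reading accel_perf's configuration printout one 'key: value' line at a time and latching the two fields each test asserts on at the end (accel_opc and accel_module, which come out as crc32c and software in the checks above). A minimal sketch of that pattern, with the case patterns and key names assumed rather than copied from accel.sh, and with the JSON accel config the real run feeds on fd 62 via -c /dev/fd/62 left out:

    while IFS=: read -r var val; do
        case "$var" in
            *[Mm]odule*)   accel_module=${val// /} ;;   # expected to land as "software" (key name assumed)
            *[Ww]orkload*) accel_opc=${val// /}    ;;   # expected to land as "crc32c" (key name assumed)
        esac
    done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y 2>&1)

Reading from a process substitution keeps both variables in the parent shell, which mirrors the fact that the closing '[[ -n software ]]' / '[[ -n crc32c ]]' checks above can still see them.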
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 01:01:45.731 [2024-07-22 10:58:50.592703] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:45.731 [2024-07-22 10:58:50.592807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76518 ] 01:01:45.731 [2024-07-22 10:58:50.730994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:45.731 [2024-07-22 10:58:50.822272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.731 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:45.732 10:58:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:47.125 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:47.126 10:58:52 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:01:47.126 01:01:47.126 real 0m1.498s 01:01:47.126 user 0m1.262s 01:01:47.126 sys 0m0.140s 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:47.126 10:58:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 01:01:47.126 ************************************ 01:01:47.126 END TEST accel_crc32c_C2 01:01:47.126 ************************************ 01:01:47.126 10:58:52 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:47.126 10:58:52 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 01:01:47.126 10:58:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:01:47.126 10:58:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:47.126 10:58:52 accel -- common/autotest_common.sh@10 -- # set +x 01:01:47.126 ************************************ 01:01:47.126 START TEST accel_copy 01:01:47.126 ************************************ 01:01:47.126 10:58:52 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.126 10:58:52 accel.accel_copy -- 
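Each case in this block is launched through run_test from autotest_common.sh (here: run_test accel_copy accel_test -t 1 -w copy -y), which is what produces the START TEST / END TEST banners and the real/user/sys totals in between. A rough, simplified equivalent of that wrapper (the real helper's banner text, xtrace handling and return-code bookkeeping differ):

    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # bash's time keyword emits the real/user/sys lines seen in this log
        echo "************ END TEST $name ************"
    }
    # usage mirroring the invocation recorded above; accel_test is the helper defined in accel.sh
    run_test_sketch accel_copy accel_test -t 1 -w copy -y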
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 01:01:47.126 10:58:52 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 01:01:47.126 [2024-07-22 10:58:52.144116] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:47.126 [2024-07-22 10:58:52.144204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76552 ] 01:01:47.126 [2024-07-22 10:58:52.282400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:47.385 [2024-07-22 10:58:52.366633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 
10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val=software 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val=32 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val=32 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val=1 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:47.385 10:58:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 01:01:48.758 10:58:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:01:48.758 01:01:48.758 real 0m1.489s 01:01:48.758 user 0m1.267s 01:01:48.758 sys 0m0.127s 01:01:48.758 10:58:53 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:48.758 10:58:53 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 01:01:48.758 ************************************ 01:01:48.758 END TEST accel_copy 01:01:48.758 ************************************ 01:01:48.758 10:58:53 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:48.758 10:58:53 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 01:01:48.758 10:58:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 01:01:48.758 10:58:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:48.758 10:58:53 accel -- common/autotest_common.sh@10 -- # set +x 01:01:48.758 ************************************ 01:01:48.758 START TEST accel_fill 01:01:48.758 ************************************ 01:01:48.758 10:58:53 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:48.758 10:58:53 accel.accel_fill -- 
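The fill case overrides the queue and alignment defaults: run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y. That is why the trace that follows records val=0x80 (128, presumably the fill pattern) and two val=64 entries where the other workloads show 32/32; reading -q and -a as queue depth and alignment is inferred from those values, not taken from accel_perf's help text. The recorded command line, reproducible as-is apart from the JSON accel config expected on fd 62, is:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y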
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 01:01:48.758 10:58:53 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 01:01:48.758 [2024-07-22 10:58:53.680576] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:48.758 [2024-07-22 10:58:53.680697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76587 ] 01:01:48.758 [2024-07-22 10:58:53.818513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:48.758 [2024-07-22 10:58:53.903706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.015 10:58:53 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val=software 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val=64 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val=64 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val=1 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.015 10:58:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.947 ************************************ 01:01:49.947 END TEST accel_fill 01:01:49.947 ************************************ 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 01:01:49.947 10:58:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:01:49.947 01:01:49.947 real 0m1.479s 01:01:49.947 user 0m1.255s 01:01:49.947 sys 0m0.130s 01:01:49.947 10:58:55 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:49.947 10:58:55 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 01:01:50.206 10:58:55 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:50.206 10:58:55 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 01:01:50.206 10:58:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:01:50.206 10:58:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:50.206 10:58:55 accel -- common/autotest_common.sh@10 -- # set +x 01:01:50.206 ************************************ 01:01:50.206 START TEST accel_copy_crc32c 01:01:50.206 ************************************ 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 01:01:50.206 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 01:01:50.206 [2024-07-22 10:58:55.221342] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:50.206 [2024-07-22 10:58:55.221428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76616 ] 01:01:50.206 [2024-07-22 10:58:55.361875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:50.464 [2024-07-22 10:58:55.446608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:50.464 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:50.465 10:58:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 01:01:51.868 ************************************ 01:01:51.868 END TEST accel_copy_crc32c 01:01:51.868 ************************************ 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:01:51.868 01:01:51.868 real 0m1.461s 01:01:51.868 user 0m0.014s 01:01:51.868 sys 0m0.004s 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:51.868 10:58:56 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 01:01:51.868 10:58:56 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:51.868 10:58:56 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 01:01:51.869 10:58:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:01:51.869 10:58:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:51.869 10:58:56 accel -- common/autotest_common.sh@10 -- # set +x 01:01:51.869 ************************************ 01:01:51.869 START TEST accel_copy_crc32c_C2 01:01:51.869 ************************************ 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 
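accel_copy_crc32c_C2 repeats the copy_crc32c case with -C 2. Consistent with -C being a chained-operation count (an inference from the trace, not from accel_perf's documentation), the run below reports an '8192 bytes' entry (2 x 4096) alongside the usual '4096 bytes', where the plain copy_crc32c run above reported the 4096-byte size twice. The recorded command line is:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2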
-- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 01:01:51.869 [2024-07-22 10:58:56.731044] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:01:51.869 [2024-07-22 10:58:56.731130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76656 ] 01:01:51.869 [2024-07-22 10:58:56.868805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:51.869 [2024-07-22 10:58:56.935076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:57 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:51.869 10:58:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:53.243 ************************************ 01:01:53.243 END TEST accel_copy_crc32c_C2 01:01:53.243 ************************************ 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:01:53.243 01:01:53.243 real 0m1.425s 01:01:53.243 
user 0m1.217s 01:01:53.243 sys 0m0.113s 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 01:01:53.243 10:58:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 01:01:53.243 10:58:58 accel -- common/autotest_common.sh@1142 -- # return 0 01:01:53.243 10:58:58 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 01:01:53.243 10:58:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 01:01:53.243 10:58:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:01:53.243 10:58:58 accel -- common/autotest_common.sh@10 -- # set +x 01:01:53.243 ************************************ 01:01:53.243 START TEST accel_dualcast 01:01:53.243 ************************************ 01:01:53.243 10:58:58 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 01:01:53.243 10:58:58 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 01:01:53.243 10:58:58 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 01:01:53.244 [2024-07-22 10:58:58.209695] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:01:53.243 10:58:58 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
01:01:53.243 ************************************
01:01:53.243 START TEST accel_dualcast
01:01:53.243 ************************************
01:01:53.243 10:58:58 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
01:01:53.244 10:58:58 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
01:01:53.244 [2024-07-22 10:58:58.209695] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
01:01:53.244 [2024-07-22 10:58:58.209784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76685 ]
01:01:53.244 [2024-07-22 10:58:58.348618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:01:53.244 [2024-07-22 10:58:58.427758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
01:01:53.520 10:58:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1, val=dualcast, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes
01:01:53.520 10:58:58 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
01:01:53.520 10:58:58 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
01:01:54.452 10:58:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
01:01:54.452 10:58:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
01:01:54.452 10:58:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
01:01:54.452 ************************************
01:01:54.452 END TEST accel_dualcast
01:01:54.452 ************************************
01:01:54.452 real 0m1.453s
01:01:54.452 user 0m1.241s
01:01:54.452 sys 0m0.116s
01:01:54.710 10:58:59 accel -- common/autotest_common.sh@1142 -- # return 0
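The accel/accel.sh@19-@23 trace entries repeated through every test above come from a small parser loop: IFS is set to ':', each line is read into var and val, and a case statement records the opcode and the engine module. Only that shape is visible in the log; the key names and the sample input in the sketch below are invented for illustration.

  #!/usr/bin/env bash
  # Rough reconstruction of the option-parse loop the trace shows.
  # Only IFS=:, read -r var val, case "$var" in and the accel_opc=/
  # accel_module= assignments come from the log; the rest is assumed.
  accel_opc=""
  accel_module=""
  while IFS=: read -r var val; do
      val=${val# }                        # drop the space after the colon
      case "$var" in
          "operation")  accel_opc=$val ;;      # e.g. dualcast, compare, xor
          "module")     accel_module=$val ;;   # e.g. software
          *)            : ;;                   # other lines are read and skipped
      esac
  done <<'EOF'
  operation: dualcast
  module: software
  queue depth: 32
  run time: 1 seconds
  EOF
  echo "opc=$accel_opc module=$accel_module"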
01:01:54.710 10:58:59 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
01:01:54.710 ************************************
01:01:54.710 START TEST accel_compare
01:01:54.710 ************************************
01:01:54.710 10:58:59 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
01:01:54.711 10:58:59 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
01:01:54.711 10:58:59 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
01:01:54.711 [2024-07-22 10:58:59.715058] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
01:01:54.711 [2024-07-22 10:58:59.715161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76725 ]
01:01:54.711 [2024-07-22 10:58:59.854780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:01:54.968 [2024-07-22 10:58:59.939334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
01:01:54.968 10:58:59 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1, val=compare, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes
01:01:54.968 10:58:59 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
01:01:54.969 10:59:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
01:01:56.340 10:59:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
01:01:56.340 10:59:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
01:01:56.340 10:59:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
01:01:56.340 ************************************
01:01:56.340 END TEST accel_compare
01:01:56.340 ************************************
01:01:56.340 real 0m1.457s
01:01:56.340 user 0m1.248s
01:01:56.340 sys 0m0.116s
01:01:56.340 10:59:01 accel -- common/autotest_common.sh@1142 -- # return 0
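Each run above launches accel_perf with -c /dev/fd/62, and the surrounding trace shows build_accel_config collecting accel_json_cfg entries, joining them with IFS=',' and piping the result through jq -r. A plausible way those pieces fit together is sketched below; the JSON content, the example config entry and the fd-62 plumbing via process substitution are assumptions, not taken from accel/accel.sh, and cat stands in for accel_perf.

  #!/usr/bin/env bash
  # Hedged sketch of the build_accel_config / "-c /dev/fd/62" pattern.
  accel_json_cfg=('"method": "framework_start_init"')   # invented example entry

  build_accel_config() {
      local IFS=,
      # Join the collected fragments into one JSON document and pretty-print it.
      echo "{ ${accel_json_cfg[*]} }" | jq -r .
  }

  # Expose the generated config on file descriptor 62, mirroring
  # `accel_perf -c /dev/fd/62 ...`; cat stands in for accel_perf here.
  exec 62< <(build_accel_config)
  cat /dev/fd/62
  exec 62<&-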
01:01:56.340 10:59:01 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
01:01:56.340 ************************************
01:01:56.340 START TEST accel_xor
01:01:56.340 ************************************
01:01:56.340 10:59:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
01:01:56.340 10:59:01 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
01:01:56.340 10:59:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
01:01:56.340 [2024-07-22 10:59:01.227436] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
01:01:56.340 [2024-07-22 10:59:01.228280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76754 ]
01:01:56.340 [2024-07-22 10:59:01.372163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:01:56.340 [2024-07-22 10:59:01.443618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
01:01:56.340 10:59:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1, val=xor, val=2, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes
01:01:56.340 10:59:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
01:01:56.340 10:59:01 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
01:01:57.712 10:59:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
01:01:57.712 10:59:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
01:01:57.712 10:59:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
01:01:57.712 ************************************
01:01:57.712 END TEST accel_xor
01:01:57.712 ************************************
01:01:57.712 real 0m1.453s
01:01:57.712 user 0m1.242s
01:01:57.712 sys 0m0.116s
01:01:57.712 10:59:02 accel -- common/autotest_common.sh@1142 -- # return 0
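The next entry repeats the xor test with "-x 3", and the only difference in the parsed values is 2 becoming 3, which suggests -x selects the number of xor source buffers. A sketch of sweeping that parameter directly is below; the flag's meaning is inferred from the log rather than from accel_perf's help output, and the binary path is the one this build's trace shows.

  # Hedged usage sketch: re-run the xor workload with 2 and then 3 source
  # buffers, assuming -x is the source-buffer count implied by the trace.
  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  for nsrc in 2 3; do
      echo ">>> xor, $nsrc source buffers"
      "$accel_perf" -t 1 -w xor -y -x "$nsrc"
  done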
01:01:57.712 10:59:02 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
01:01:57.712 ************************************
01:01:57.712 START TEST accel_xor
01:01:57.712 ************************************
01:01:57.712 10:59:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
01:01:57.712 10:59:02 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
01:01:57.712 10:59:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
01:01:57.712 [2024-07-22 10:59:02.732605] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
01:01:57.712 [2024-07-22 10:59:02.732709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76794 ]
01:01:57.712 [2024-07-22 10:59:02.873322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:01:57.969 [2024-07-22 10:59:02.971329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
01:01:57.969 10:59:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1, val=xor, val=3, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes
01:01:57.969 10:59:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
01:01:57.970 10:59:03 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
01:01:59.342 10:59:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
01:01:59.342 10:59:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
01:01:59.342 10:59:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
01:01:59.342 ************************************
01:01:59.342 END TEST accel_xor
01:01:59.342 ************************************
01:01:59.342 real 0m1.483s
01:01:59.342 user 0m1.278s
01:01:59.342 sys 0m0.113s
01:01:59.342 10:59:04 accel -- common/autotest_common.sh@1142 -- # return 0
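Every test above ends with the same three accel/accel.sh@27 checks: the parsed module and opcode are non-empty and the module equals the expected engine. The \s\o\f\t\w\a\r\e in the trace is simply how xtrace escapes a literal (non-pattern) right-hand side inside [[ ]]. A hedged sketch of that assertion step follows; only accel_module and accel_opc appear in the trace, the rest is invented for illustration.

  # Sketch of the post-run assertions; default values let it run standalone.
  accel_module=${accel_module:-software}
  accel_opc=${accel_opc:-xor}
  expected_module=software

  if [[ -z "$accel_module" || -z "$accel_opc" ]]; then
      echo "accel_perf output did not report a module/opcode" >&2
      exit 1
  fi
  if [[ "$accel_module" != "$expected_module" ]]; then
      echo "ran on '$accel_module', expected '$expected_module'" >&2
      exit 1
  fi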
01:01:59.342 10:59:04 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
01:01:59.342 ************************************
01:01:59.342 START TEST accel_dif_verify
01:01:59.342 ************************************
01:01:59.342 10:59:04 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
01:01:59.342 10:59:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
01:01:59.342 10:59:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
01:01:59.342 [2024-07-22 10:59:04.264239] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
01:01:59.342 [2024-07-22 10:59:04.264334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76823 ]
01:01:59.342 [2024-07-22 10:59:04.403419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:01:59.602 [2024-07-22 10:59:04.500997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
01:01:59.602 10:59:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1, val=dif_verify, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=No
01:01:59.602 10:59:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
01:01:59.602 10:59:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
01:02:00.542 10:59:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
01:02:00.542 10:59:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
01:02:00.542 10:59:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
01:02:00.542 ************************************
01:02:00.542 END TEST accel_dif_verify
01:02:00.542 ************************************
01:02:00.542 real 0m1.468s
01:02:00.542 user 0m1.262s
01:02:00.542 sys 0m0.116s
01:02:00.542 10:59:05 accel -- common/autotest_common.sh@1142 -- # return 0
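The dif_verify run above parses four sizes: 4096, 4096, 512 and 8 bytes. That combination is consistent with 4 KiB transfers made of 512-byte blocks, each carrying an 8-byte data-integrity field; which accel_perf option each value maps to is an inference from the trace, not something the log states.

  # Back-of-the-envelope check of the sizes traced above (interpretation assumed).
  xfer=4096 block=512 dif=8
  blocks=$((xfer / block))
  echo "$blocks blocks per transfer, $((blocks * dif)) bytes of protection information"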
01:02:00.542 10:59:05 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
01:02:00.800 ************************************
01:02:00.800 START TEST accel_dif_generate
01:02:00.800 ************************************
01:02:00.800 10:59:05 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
01:02:00.800 10:59:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
01:02:00.800 10:59:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
01:02:00.800 [2024-07-22 10:59:05.780358] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
01:02:00.800 [2024-07-22 10:59:05.780451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76863 ]
01:02:00.800 [2024-07-22 10:59:05.922068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:02:01.058 [2024-07-22 10:59:05.993908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
01:02:01.058 10:59:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1, val=dif_generate, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=No
01:02:01.058 10:59:06 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
01:02:01.059 10:59:06 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
01:02:01.993 10:59:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
01:02:01.993 10:59:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
01:02:01.993 10:59:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
01:02:01.993 ************************************
01:02:01.993 END TEST accel_dif_generate
01:02:01.993 ************************************
01:02:01.993 real 0m1.438s
01:02:01.993 user 0m1.229s
01:02:01.993 sys 0m0.120s
01:02:02.253 10:59:07 accel -- common/autotest_common.sh@1142 -- # return 0
01:02:01.993 user 0m1.229s 01:02:01.993 sys 0m0.120s 01:02:01.993 10:59:07 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:01.993 10:59:07 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 01:02:01.993 ************************************ 01:02:01.993 END TEST accel_dif_generate 01:02:01.993 ************************************ 01:02:02.253 10:59:07 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:02.253 10:59:07 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 01:02:02.253 10:59:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 01:02:02.253 10:59:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:02.253 10:59:07 accel -- common/autotest_common.sh@10 -- # set +x 01:02:02.253 ************************************ 01:02:02.253 START TEST accel_dif_generate_copy 01:02:02.253 ************************************ 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 01:02:02.253 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 01:02:02.253 [2024-07-22 10:59:07.266632] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:02:02.253 [2024-07-22 10:59:07.266724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76892 ] 01:02:02.253 [2024-07-22 10:59:07.404455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:02.511 [2024-07-22 10:59:07.467576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.511 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:02.512 10:59:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:02:03.885 01:02:03.885 real 0m1.442s 01:02:03.885 user 0m1.233s 01:02:03.885 sys 0m0.120s 01:02:03.885 ************************************ 01:02:03.885 END TEST accel_dif_generate_copy 01:02:03.885 ************************************ 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:03.885 10:59:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 01:02:03.885 10:59:08 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:03.885 10:59:08 accel -- accel/accel.sh@115 -- # [[ y == y ]] 01:02:03.885 10:59:08 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:03.885 10:59:08 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 01:02:03.885 10:59:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:03.885 10:59:08 accel -- common/autotest_common.sh@10 -- # set +x 01:02:03.885 ************************************ 01:02:03.885 START TEST accel_comp 01:02:03.885 ************************************ 01:02:03.885 10:59:08 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 01:02:03.885 10:59:08 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 01:02:03.885 10:59:08 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 01:02:03.885 [2024-07-22 10:59:08.767242] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:03.885 [2024-07-22 10:59:08.767333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76921 ] 01:02:03.885 [2024-07-22 10:59:08.903261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:03.885 [2024-07-22 10:59:08.966336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val=software 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 01:02:03.885 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val=1 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val=No 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.886 10:59:09 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:03.886 10:59:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:05.259 10:59:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:05.259 10:59:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.259 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:05.259 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:05.259 10:59:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:05.259 10:59:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.259 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:05.260 ************************************ 01:02:05.260 END TEST accel_comp 01:02:05.260 ************************************ 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 01:02:05.260 10:59:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:02:05.260 01:02:05.260 real 0m1.419s 01:02:05.260 user 0m1.219s 01:02:05.260 sys 0m0.109s 01:02:05.260 10:59:10 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:05.260 10:59:10 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 01:02:05.260 10:59:10 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:05.260 10:59:10 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:02:05.260 10:59:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 01:02:05.260 10:59:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:05.260 10:59:10 accel -- common/autotest_common.sh@10 -- # set +x 01:02:05.260 ************************************ 01:02:05.260 START TEST accel_decomp 01:02:05.260 ************************************ 01:02:05.260 10:59:10 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 01:02:05.260 10:59:10 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 01:02:05.260 [2024-07-22 10:59:10.237717] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:05.260 [2024-07-22 10:59:10.237821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76961 ] 01:02:05.260 [2024-07-22 10:59:10.367181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:05.260 [2024-07-22 10:59:10.428447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:05.518 10:59:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:02:06.451 10:59:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:02:06.451 01:02:06.451 real 0m1.422s 01:02:06.451 user 0m1.225s 01:02:06.451 sys 0m0.107s 01:02:06.451 10:59:11 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:06.451 10:59:11 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 01:02:06.451 ************************************ 01:02:06.451 END TEST accel_decomp 01:02:06.451 ************************************ 01:02:06.709 10:59:11 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:06.709 10:59:11 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 01:02:06.709 10:59:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 01:02:06.709 10:59:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:06.709 10:59:11 accel -- common/autotest_common.sh@10 -- # set +x 01:02:06.709 ************************************ 01:02:06.709 START TEST accel_decomp_full 01:02:06.709 ************************************ 01:02:06.709 10:59:11 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 01:02:06.709 10:59:11 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 01:02:06.709 [2024-07-22 10:59:11.713228] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:02:06.709 [2024-07-22 10:59:11.713325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76990 ] 01:02:06.709 [2024-07-22 10:59:11.847984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:06.968 [2024-07-22 10:59:11.919680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:06.968 10:59:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:08.341 10:59:13 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:02:08.341 10:59:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:02:08.341 01:02:08.341 real 0m1.445s 01:02:08.341 user 0m1.243s 01:02:08.341 sys 0m0.111s 01:02:08.341 10:59:13 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:08.341 ************************************ 01:02:08.341 END TEST accel_decomp_full 01:02:08.341 ************************************ 01:02:08.341 10:59:13 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 01:02:08.341 10:59:13 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:08.341 10:59:13 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 01:02:08.341 10:59:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 01:02:08.341 10:59:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:08.341 10:59:13 accel -- common/autotest_common.sh@10 -- # set +x 01:02:08.341 ************************************ 01:02:08.341 START TEST accel_decomp_mcore 01:02:08.341 ************************************ 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 01:02:08.341 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 01:02:08.342 [2024-07-22 10:59:13.202059] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:08.342 [2024-07-22 10:59:13.202177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77030 ] 01:02:08.342 [2024-07-22 10:59:13.339883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:08.342 [2024-07-22 10:59:13.422210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:08.342 [2024-07-22 10:59:13.422358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:08.342 [2024-07-22 10:59:13.422884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:02:08.342 [2024-07-22 10:59:13.422896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:08.342 10:59:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:02:09.725 01:02:09.725 real 0m1.515s 01:02:09.725 user 0m4.794s 01:02:09.725 sys 0m0.138s 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:09.725 10:59:14 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 01:02:09.725 ************************************ 01:02:09.725 END TEST accel_decomp_mcore 01:02:09.725 ************************************ 01:02:09.725 10:59:14 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:09.725 10:59:14 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 01:02:09.725 10:59:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 01:02:09.725 10:59:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:09.725 10:59:14 accel -- common/autotest_common.sh@10 -- # set +x 01:02:09.725 ************************************ 01:02:09.725 START TEST accel_decomp_full_mcore 01:02:09.725 ************************************ 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:09.725 10:59:14 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 01:02:09.725 10:59:14 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 01:02:09.725 [2024-07-22 10:59:14.768735] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:09.725 [2024-07-22 10:59:14.768801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77062 ] 01:02:09.725 [2024-07-22 10:59:14.902914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:09.984 [2024-07-22 10:59:14.987979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:09.984 [2024-07-22 10:59:14.988115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:09.984 [2024-07-22 10:59:14.988535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:02:09.984 [2024-07-22 10:59:14.988573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 01:02:09.984 10:59:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:09.984 10:59:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:02:11.376 01:02:11.376 real 0m1.566s 01:02:11.376 user 0m4.963s 01:02:11.376 sys 0m0.162s 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:11.376 10:59:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 01:02:11.376 ************************************ 01:02:11.376 END TEST accel_decomp_full_mcore 01:02:11.376 ************************************ 01:02:11.376 10:59:16 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:11.376 10:59:16 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 01:02:11.376 10:59:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 01:02:11.376 10:59:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:11.376 10:59:16 accel -- common/autotest_common.sh@10 -- # set +x 01:02:11.376 ************************************ 01:02:11.376 START TEST accel_decomp_mthread 01:02:11.376 ************************************ 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 01:02:11.376 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 01:02:11.376 [2024-07-22 10:59:16.382059] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
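For reference, the two multicore decompress runs that finish above are plain accel_perf invocations; the sketch below restates them as standalone commands. It is an illustration only: in the trace, -c /dev/fd/62 is a pipe that accel.sh uses to feed its (empty here) accel JSON config, so CONFIG below is a hypothetical placeholder for whatever config file a manual run would supply.

# Core mask 0xf matches the EAL '-c 0xf' parameters and the four
# "Reactor started on core N" notices above.
CONFIG=/path/to/accel.json   # hypothetical stand-in for the harness's /dev/fd/62 pipe
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c "$CONFIG" -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
# Adding -o 0 is what switches the reported size from '4096 bytes' to
# '111250 bytes' in the full_mcore trace above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c "$CONFIG" -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf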
01:02:11.376 [2024-07-22 10:59:16.382182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77105 ] 01:02:11.376 [2024-07-22 10:59:16.516712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:11.635 [2024-07-22 10:59:16.597220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:11.635 10:59:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.061 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:02:13.062 01:02:13.062 real 0m1.531s 01:02:13.062 user 0m1.291s 01:02:13.062 sys 0m0.144s 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:13.062 10:59:17 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 01:02:13.062 ************************************ 01:02:13.062 END TEST accel_decomp_mthread 01:02:13.062 ************************************ 01:02:13.062 10:59:17 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:13.062 10:59:17 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 01:02:13.062 10:59:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 01:02:13.062 10:59:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:13.062 10:59:17 accel -- common/autotest_common.sh@10 -- # set +x 01:02:13.062 ************************************ 01:02:13.062 START 
TEST accel_decomp_full_mthread 01:02:13.062 ************************************ 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 01:02:13.062 10:59:17 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 01:02:13.062 [2024-07-22 10:59:17.967416] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
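The two mthread variants differ from the multicore runs only in their flags: the EAL core mask drops to 0x1 (a single reactor) and -T 2 is added, which the trace records as val=2 in the accel config loop. A minimal sketch of the same pair of invocations, under the same placeholder-config assumption as the sketch above:

CONFIG=/path/to/accel.json   # hypothetical stand-in for the harness's /dev/fd/62 pipe
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c "$CONFIG" -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c "$CONFIG" -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2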
01:02:13.062 [2024-07-22 10:59:17.967501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77135 ] 01:02:13.062 [2024-07-22 10:59:18.104609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:13.062 [2024-07-22 10:59:18.192840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:13.321 10:59:18 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:13.321 10:59:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 01:02:14.696 ************************************ 01:02:14.696 END TEST accel_decomp_full_mthread 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:02:14.696 01:02:14.696 real 0m1.582s 01:02:14.696 user 0m1.329s 01:02:14.696 sys 0m0.155s 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:14.696 10:59:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 01:02:14.696 ************************************ 
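Every decompress sub-test above follows the same wrapper pattern: run_test names the test and hands the remaining arguments to accel_test, which accel.sh (lines 12 and 15 in the trace) forwards to the accel_perf binary with the generated config on /dev/fd/62. Both run_test and accel_test are harness functions, not standalone commands; the full_mthread case, copied from the trace, serves as the example:

# run_test <name> accel_test <accel_perf args...>
#   -> accel_perf <args...>
#   -> /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 <args...>
run_test accel_decomp_full_mthread accel_test -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2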
01:02:14.696 10:59:19 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:14.696 10:59:19 accel -- accel/accel.sh@124 -- # [[ n == y ]] 01:02:14.696 10:59:19 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 01:02:14.696 10:59:19 accel -- accel/accel.sh@137 -- # build_accel_config 01:02:14.696 10:59:19 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:02:14.696 10:59:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:14.696 10:59:19 accel -- common/autotest_common.sh@10 -- # set +x 01:02:14.696 10:59:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 01:02:14.696 10:59:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 01:02:14.696 10:59:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 01:02:14.696 10:59:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 01:02:14.696 10:59:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 01:02:14.696 10:59:19 accel -- accel/accel.sh@40 -- # local IFS=, 01:02:14.696 10:59:19 accel -- accel/accel.sh@41 -- # jq -r . 01:02:14.696 ************************************ 01:02:14.696 START TEST accel_dif_functional_tests 01:02:14.696 ************************************ 01:02:14.696 10:59:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 01:02:14.696 [2024-07-22 10:59:19.634097] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:14.696 [2024-07-22 10:59:19.634186] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77175 ] 01:02:14.696 [2024-07-22 10:59:19.776444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:02:14.696 [2024-07-22 10:59:19.860495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:14.696 [2024-07-22 10:59:19.860654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:14.696 [2024-07-22 10:59:19.860664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:14.955 01:02:14.955 01:02:14.955 CUnit - A unit testing framework for C - Version 2.1-3 01:02:14.955 http://cunit.sourceforge.net/ 01:02:14.955 01:02:14.955 01:02:14.955 Suite: accel_dif 01:02:14.955 Test: verify: DIF generated, GUARD check ...passed 01:02:14.955 Test: verify: DIF generated, APPTAG check ...passed 01:02:14.955 Test: verify: DIF generated, REFTAG check ...passed 01:02:14.955 Test: verify: DIF not generated, GUARD check ...passed 01:02:14.955 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 10:59:19.961733] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 01:02:14.955 [2024-07-22 10:59:19.961835] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 01:02:14.955 passed 01:02:14.955 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 10:59:19.961872] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 01:02:14.955 passed 01:02:14.955 Test: verify: APPTAG correct, APPTAG check ...passed 01:02:14.955 Test: verify: APPTAG incorrect, APPTAG check ...passed 01:02:14.955 Test: verify: APPTAG incorrect, no APPTAG check ...passed 01:02:14.955 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 01:02:14.955 Test: verify: REFTAG_INIT 
correct, REFTAG check ...[2024-07-22 10:59:19.962158] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 01:02:14.955 passed 01:02:14.955 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 10:59:19.962319] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 01:02:14.955 passed 01:02:14.955 Test: verify copy: DIF generated, GUARD check ...passed 01:02:14.955 Test: verify copy: DIF generated, APPTAG check ...passed 01:02:14.955 Test: verify copy: DIF generated, REFTAG check ...passed 01:02:14.955 Test: verify copy: DIF not generated, GUARD check ...passed 01:02:14.955 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 10:59:19.962846] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 01:02:14.955 [2024-07-22 10:59:19.962904] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 01:02:14.955 passed 01:02:14.955 Test: verify copy: DIF not generated, REFTAG check ...passed 01:02:14.955 Test: generate copy: DIF generated, GUARD check ...[2024-07-22 10:59:19.963060] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 01:02:14.955 passed 01:02:14.955 Test: generate copy: DIF generated, APTTAG check ...passed 01:02:14.955 Test: generate copy: DIF generated, REFTAG check ...passed 01:02:14.955 Test: generate copy: DIF generated, no GUARD check flag set ...passed 01:02:14.955 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 01:02:14.955 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 01:02:14.955 Test: generate copy: iovecs-len validate ...[2024-07-22 10:59:19.963710] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
01:02:14.955 passed 01:02:14.955 Test: generate copy: buffer alignment validate ...passed 01:02:14.955 01:02:14.955 Run Summary: Type Total Ran Passed Failed Inactive 01:02:14.955 suites 1 1 n/a 0 0 01:02:14.955 tests 26 26 26 0 0 01:02:14.955 asserts 115 115 115 0 n/a 01:02:14.955 01:02:14.955 Elapsed time = 0.004 seconds 01:02:15.214 01:02:15.214 real 0m0.598s 01:02:15.214 user 0m0.803s 01:02:15.214 sys 0m0.183s 01:02:15.214 10:59:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:15.214 ************************************ 01:02:15.214 END TEST accel_dif_functional_tests 01:02:15.214 ************************************ 01:02:15.214 10:59:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 01:02:15.214 10:59:20 accel -- common/autotest_common.sh@1142 -- # return 0 01:02:15.214 ************************************ 01:02:15.214 END TEST accel 01:02:15.214 ************************************ 01:02:15.214 01:02:15.214 real 0m34.220s 01:02:15.214 user 0m36.129s 01:02:15.214 sys 0m4.226s 01:02:15.214 10:59:20 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:15.214 10:59:20 accel -- common/autotest_common.sh@10 -- # set +x 01:02:15.214 10:59:20 -- common/autotest_common.sh@1142 -- # return 0 01:02:15.214 10:59:20 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 01:02:15.214 10:59:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:15.214 10:59:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:15.214 10:59:20 -- common/autotest_common.sh@10 -- # set +x 01:02:15.214 ************************************ 01:02:15.214 START TEST accel_rpc 01:02:15.214 ************************************ 01:02:15.214 10:59:20 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 01:02:15.214 * Looking for test storage... 01:02:15.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 01:02:15.214 10:59:20 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:02:15.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:15.214 10:59:20 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77240 01:02:15.214 10:59:20 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77240 01:02:15.214 10:59:20 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 01:02:15.214 10:59:20 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 77240 ']' 01:02:15.214 10:59:20 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:15.214 10:59:20 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:15.214 10:59:20 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:15.214 10:59:20 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:15.214 10:59:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:15.472 [2024-07-22 10:59:20.426206] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
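The DIF functional test above is a standalone CUnit binary (its *ERROR* lines are the expected failure paths of the negative checks, which is why all 26 tests still pass), while the accel_rpc test starting here switches to an RPC-driven flow: spdk_tgt is launched with --wait-for-rpc so opcode assignments can be made before framework initialization. The two launch commands as they appear in the trace (the /dev/fd/62 config-pipe caveat from the earlier sketches applies; backgrounding with & mirrors the harness waiting on the pid):

# DIF functional tests: CUnit binary, config piped in by the harness.
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
# accel_rpc tests: target held in wait-for-RPC mode until opcodes are assigned.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &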
01:02:15.472 [2024-07-22 10:59:20.426314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77240 ] 01:02:15.472 [2024-07-22 10:59:20.568167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:15.472 [2024-07-22 10:59:20.661311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:16.406 10:59:21 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:16.406 10:59:21 accel_rpc -- common/autotest_common.sh@862 -- # return 0 01:02:16.406 10:59:21 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 01:02:16.406 10:59:21 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 01:02:16.406 10:59:21 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 01:02:16.406 10:59:21 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 01:02:16.406 10:59:21 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 01:02:16.406 10:59:21 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:16.406 10:59:21 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:16.406 10:59:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:16.406 ************************************ 01:02:16.406 START TEST accel_assign_opcode 01:02:16.406 ************************************ 01:02:16.406 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 01:02:16.406 10:59:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 01:02:16.406 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:16.406 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:02:16.406 [2024-07-22 10:59:21.482111] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 01:02:16.406 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:16.406 10:59:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 01:02:16.406 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:16.407 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:02:16.407 [2024-07-22 10:59:21.490130] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 01:02:16.407 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:16.407 10:59:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 01:02:16.407 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:16.407 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:16.665 software 01:02:16.665 ************************************ 01:02:16.665 END TEST accel_assign_opcode 01:02:16.665 ************************************ 01:02:16.665 01:02:16.665 real 0m0.343s 01:02:16.665 user 0m0.061s 01:02:16.665 sys 0m0.009s 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:16.665 10:59:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 01:02:16.665 10:59:21 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 01:02:16.665 10:59:21 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77240 01:02:16.665 10:59:21 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 77240 ']' 01:02:16.665 10:59:21 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 77240 01:02:16.665 10:59:21 accel_rpc -- common/autotest_common.sh@953 -- # uname 01:02:16.665 10:59:21 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:16.665 10:59:21 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77240 01:02:16.923 killing process with pid 77240 01:02:16.923 10:59:21 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:16.923 10:59:21 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:16.923 10:59:21 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77240' 01:02:16.923 10:59:21 accel_rpc -- common/autotest_common.sh@967 -- # kill 77240 01:02:16.923 10:59:21 accel_rpc -- common/autotest_common.sh@972 -- # wait 77240 01:02:17.190 ************************************ 01:02:17.190 END TEST accel_rpc 01:02:17.190 ************************************ 01:02:17.190 01:02:17.190 real 0m2.047s 01:02:17.190 user 0m2.159s 01:02:17.190 sys 0m0.507s 01:02:17.190 10:59:22 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:17.190 10:59:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 01:02:17.190 10:59:22 -- common/autotest_common.sh@1142 -- # return 0 01:02:17.190 10:59:22 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:02:17.190 10:59:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:17.190 10:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:17.190 10:59:22 -- common/autotest_common.sh@10 -- # set +x 01:02:17.190 ************************************ 01:02:17.190 START TEST app_cmdline 01:02:17.190 ************************************ 01:02:17.190 10:59:22 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:02:17.448 * Looking for test storage... 
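The accel_assign_opcode flow above reduces to four RPC calls against the waiting target. rpc_cmd is the harness's wrapper around the RPC client; assuming scripts/rpc.py is called directly instead (the method names, flags, and the jq/grep check are taken verbatim from the trace), the sequence looks roughly like:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Assign the 'copy' opcode to a bogus module first (negative case), then to software.
$RPC accel_assign_opc -o copy -m incorrect
$RPC accel_assign_opc -o copy -m software
# Only now finish framework init, then confirm the assignment stuck.
$RPC framework_start_init
$RPC accel_get_opc_assignments | jq -r .copy | grep software   # prints 'software', as above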
01:02:17.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:02:17.448 10:59:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 01:02:17.448 10:59:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77351 01:02:17.448 10:59:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 01:02:17.448 10:59:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77351 01:02:17.448 10:59:22 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 77351 ']' 01:02:17.448 10:59:22 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:17.448 10:59:22 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:17.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:17.448 10:59:22 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:17.448 10:59:22 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:17.448 10:59:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:02:17.448 [2024-07-22 10:59:22.528314] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:17.448 [2024-07-22 10:59:22.529151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77351 ] 01:02:17.705 [2024-07-22 10:59:22.670538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:17.705 [2024-07-22 10:59:22.761328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:18.637 10:59:23 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:18.637 10:59:23 app_cmdline -- common/autotest_common.sh@862 -- # return 0 01:02:18.637 10:59:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 01:02:18.637 { 01:02:18.637 "fields": { 01:02:18.637 "commit": "8fb860b73", 01:02:18.637 "major": 24, 01:02:18.637 "minor": 9, 01:02:18.637 "patch": 0, 01:02:18.637 "suffix": "-pre" 01:02:18.637 }, 01:02:18.637 "version": "SPDK v24.09-pre git sha1 8fb860b73" 01:02:18.637 } 01:02:18.637 10:59:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 01:02:18.637 10:59:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 01:02:18.637 10:59:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 01:02:18.637 10:59:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 01:02:18.895 10:59:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:18.895 10:59:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:02:18.895 10:59:23 app_cmdline -- app/cmdline.sh@26 -- # sort 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:18.895 10:59:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 01:02:18.895 10:59:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 01:02:18.895 10:59:23 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:02:18.895 10:59:23 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:02:19.153 2024/07/22 10:59:24 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 01:02:19.153 request: 01:02:19.153 { 01:02:19.153 "method": "env_dpdk_get_mem_stats", 01:02:19.153 "params": {} 01:02:19.153 } 01:02:19.153 Got JSON-RPC error response 01:02:19.153 GoRPCClient: error on JSON-RPC call 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@651 -- # es=1 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:02:19.153 10:59:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77351 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 77351 ']' 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 77351 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@953 -- # uname 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77351 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:19.153 killing process with pid 77351 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77351' 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@967 -- # kill 77351 01:02:19.153 10:59:24 app_cmdline -- common/autotest_common.sh@972 -- # wait 77351 01:02:19.733 01:02:19.733 real 0m2.274s 01:02:19.733 user 0m2.831s 01:02:19.733 sys 0m0.557s 01:02:19.733 10:59:24 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:19.733 10:59:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:02:19.733 ************************************ 01:02:19.733 END TEST app_cmdline 01:02:19.733 
************************************ 01:02:19.733 10:59:24 -- common/autotest_common.sh@1142 -- # return 0 01:02:19.733 10:59:24 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:02:19.733 10:59:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:02:19.733 10:59:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:19.733 10:59:24 -- common/autotest_common.sh@10 -- # set +x 01:02:19.733 ************************************ 01:02:19.733 START TEST version 01:02:19.733 ************************************ 01:02:19.733 10:59:24 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:02:19.733 * Looking for test storage... 01:02:19.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:02:19.733 10:59:24 version -- app/version.sh@17 -- # get_header_version major 01:02:19.733 10:59:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:02:19.733 10:59:24 version -- app/version.sh@14 -- # cut -f2 01:02:19.733 10:59:24 version -- app/version.sh@14 -- # tr -d '"' 01:02:19.733 10:59:24 version -- app/version.sh@17 -- # major=24 01:02:19.733 10:59:24 version -- app/version.sh@18 -- # get_header_version minor 01:02:19.733 10:59:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:02:19.733 10:59:24 version -- app/version.sh@14 -- # cut -f2 01:02:19.733 10:59:24 version -- app/version.sh@14 -- # tr -d '"' 01:02:19.733 10:59:24 version -- app/version.sh@18 -- # minor=9 01:02:19.733 10:59:24 version -- app/version.sh@19 -- # get_header_version patch 01:02:19.733 10:59:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:02:19.733 10:59:24 version -- app/version.sh@14 -- # cut -f2 01:02:19.733 10:59:24 version -- app/version.sh@14 -- # tr -d '"' 01:02:19.733 10:59:24 version -- app/version.sh@19 -- # patch=0 01:02:19.733 10:59:24 version -- app/version.sh@20 -- # get_header_version suffix 01:02:19.733 10:59:24 version -- app/version.sh@14 -- # cut -f2 01:02:19.733 10:59:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:02:19.733 10:59:24 version -- app/version.sh@14 -- # tr -d '"' 01:02:19.733 10:59:24 version -- app/version.sh@20 -- # suffix=-pre 01:02:19.733 10:59:24 version -- app/version.sh@22 -- # version=24.9 01:02:19.733 10:59:24 version -- app/version.sh@25 -- # (( patch != 0 )) 01:02:19.733 10:59:24 version -- app/version.sh@28 -- # version=24.9rc0 01:02:19.733 10:59:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:02:19.733 10:59:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 01:02:19.733 10:59:24 version -- app/version.sh@30 -- # py_version=24.9rc0 01:02:19.733 10:59:24 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 01:02:19.733 01:02:19.734 real 0m0.155s 01:02:19.734 user 0m0.097s 01:02:19.734 sys 0m0.088s 01:02:19.734 10:59:24 version -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:19.734 ************************************ 01:02:19.734 END TEST 
version 01:02:19.734 ************************************ 01:02:19.734 10:59:24 version -- common/autotest_common.sh@10 -- # set +x 01:02:19.734 10:59:24 -- common/autotest_common.sh@1142 -- # return 0 01:02:19.734 10:59:24 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 01:02:19.734 10:59:24 -- spdk/autotest.sh@198 -- # uname -s 01:02:19.734 10:59:24 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 01:02:19.734 10:59:24 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 01:02:19.734 10:59:24 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 01:02:19.734 10:59:24 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 01:02:19.734 10:59:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 01:02:19.734 10:59:24 -- spdk/autotest.sh@260 -- # timing_exit lib 01:02:19.734 10:59:24 -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:19.734 10:59:24 -- common/autotest_common.sh@10 -- # set +x 01:02:19.734 10:59:24 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 01:02:19.734 10:59:24 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 01:02:19.734 10:59:24 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 01:02:19.734 10:59:24 -- spdk/autotest.sh@280 -- # export NET_TYPE 01:02:19.734 10:59:24 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 01:02:19.734 10:59:24 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 01:02:19.734 10:59:24 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:02:19.734 10:59:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:02:19.734 10:59:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:19.734 10:59:24 -- common/autotest_common.sh@10 -- # set +x 01:02:19.734 ************************************ 01:02:19.734 START TEST nvmf_tcp 01:02:19.734 ************************************ 01:02:19.734 10:59:24 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:02:19.993 * Looking for test storage... 01:02:19.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:19.993 10:59:25 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:19.993 10:59:25 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:19.993 10:59:25 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:19.993 10:59:25 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:19.993 10:59:25 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:19.993 10:59:25 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:19.993 10:59:25 nvmf_tcp -- paths/export.sh@5 -- # export PATH 01:02:19.993 10:59:25 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 01:02:19.993 10:59:25 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:19.993 10:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 01:02:19.993 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 01:02:19.993 10:59:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:02:19.993 10:59:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:19.993 10:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:02:19.993 ************************************ 01:02:19.993 START TEST nvmf_example 01:02:19.993 ************************************ 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 01:02:19.993 * Looking for test storage... 
01:02:19.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:19.993 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 01:02:19.994 10:59:25 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:02:19.994 Cannot find device "nvmf_init_br" 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 01:02:19.994 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:02:20.252 Cannot find device "nvmf_tgt_br" 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:02:20.252 Cannot find device "nvmf_tgt_br2" 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:02:20.252 Cannot find device "nvmf_init_br" 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:02:20.252 Cannot find device "nvmf_tgt_br" 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:02:20.252 Cannot find device 
"nvmf_tgt_br2" 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:02:20.252 Cannot find device "nvmf_br" 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:02:20.252 Cannot find device "nvmf_init_if" 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:20.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:20.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:20.252 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:02:20.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:02:20.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 01:02:20.511 01:02:20.511 --- 10.0.0.2 ping statistics --- 01:02:20.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:20.511 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:02:20.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:20.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 01:02:20.511 01:02:20.511 --- 10.0.0.3 ping statistics --- 01:02:20.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:20.511 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:20.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:20.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:02:20.511 01:02:20.511 --- 10.0.0.1 ping statistics --- 01:02:20.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:20.511 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=77703 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
77703 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 77703 ']' 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:20.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:20.511 10:59:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:02:21.885 10:59:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:02:31.907 Initializing NVMe Controllers 01:02:31.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:02:31.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:02:31.907 Initialization complete. Launching workers. 01:02:31.907 ======================================================== 01:02:31.907 Latency(us) 01:02:31.907 Device Information : IOPS MiB/s Average min max 01:02:31.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14596.89 57.02 4385.67 822.20 22076.36 01:02:31.907 ======================================================== 01:02:31.907 Total : 14596.89 57.02 4385.67 822.20 22076.36 01:02:31.907 01:02:31.907 10:59:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 01:02:31.907 10:59:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 01:02:31.907 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 01:02:31.907 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 01:02:31.907 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:02:31.907 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 01:02:31.907 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 01:02:31.907 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:02:31.907 rmmod nvme_tcp 01:02:32.166 rmmod nvme_fabrics 01:02:32.166 rmmod nvme_keyring 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 77703 ']' 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 77703 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 77703 ']' 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 77703 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77703 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 01:02:32.166 killing process with pid 77703 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77703' 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 77703 01:02:32.166 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 77703 01:02:32.425 nvmf threads initialize successfully 01:02:32.425 bdev subsystem init successfully 
01:02:32.425 created a nvmf target service 01:02:32.425 create targets's poll groups done 01:02:32.425 all subsystems of target started 01:02:32.425 nvmf target is running 01:02:32.425 all subsystems of target stopped 01:02:32.425 destroy targets's poll groups done 01:02:32.425 destroyed the nvmf target service 01:02:32.425 bdev subsystem finish successfully 01:02:32.425 nvmf threads destroy successfully 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:32.425 10:59:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:02:32.426 10:59:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 01:02:32.426 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:32.426 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:32.426 01:02:32.426 real 0m12.399s 01:02:32.426 user 0m44.631s 01:02:32.426 sys 0m2.039s 01:02:32.426 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:32.426 10:59:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:02:32.426 ************************************ 01:02:32.426 END TEST nvmf_example 01:02:32.426 ************************************ 01:02:32.426 10:59:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:02:32.426 10:59:37 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 01:02:32.426 10:59:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:02:32.426 10:59:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:32.426 10:59:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:02:32.426 ************************************ 01:02:32.426 START TEST nvmf_filesystem 01:02:32.426 ************************************ 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 01:02:32.426 * Looking for test storage... 
01:02:32.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 01:02:32.426 10:59:37 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # 
_examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 01:02:32.426 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 01:02:32.427 #define SPDK_CONFIG_H 01:02:32.427 #define SPDK_CONFIG_APPS 1 01:02:32.427 #define SPDK_CONFIG_ARCH native 01:02:32.427 #undef SPDK_CONFIG_ASAN 01:02:32.427 #define SPDK_CONFIG_AVAHI 1 01:02:32.427 #undef SPDK_CONFIG_CET 01:02:32.427 #define SPDK_CONFIG_COVERAGE 1 01:02:32.427 #define SPDK_CONFIG_CROSS_PREFIX 01:02:32.427 #undef SPDK_CONFIG_CRYPTO 01:02:32.427 #undef SPDK_CONFIG_CRYPTO_MLX5 01:02:32.427 #undef SPDK_CONFIG_CUSTOMOCF 01:02:32.427 #undef SPDK_CONFIG_DAOS 01:02:32.427 #define SPDK_CONFIG_DAOS_DIR 01:02:32.427 #define SPDK_CONFIG_DEBUG 1 01:02:32.427 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 01:02:32.427 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 01:02:32.427 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 01:02:32.427 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 01:02:32.427 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 01:02:32.427 #undef SPDK_CONFIG_DPDK_UADK 01:02:32.427 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:02:32.427 #define SPDK_CONFIG_EXAMPLES 1 01:02:32.427 #undef SPDK_CONFIG_FC 01:02:32.427 #define SPDK_CONFIG_FC_PATH 01:02:32.427 #define SPDK_CONFIG_FIO_PLUGIN 1 01:02:32.427 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 01:02:32.427 #undef SPDK_CONFIG_FUSE 01:02:32.427 #undef SPDK_CONFIG_FUZZER 01:02:32.427 #define SPDK_CONFIG_FUZZER_LIB 01:02:32.427 #define SPDK_CONFIG_GOLANG 1 01:02:32.427 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 01:02:32.427 #define SPDK_CONFIG_HAVE_EVP_MAC 1 01:02:32.427 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 01:02:32.427 #define SPDK_CONFIG_HAVE_KEYUTILS 1 01:02:32.427 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 01:02:32.427 #undef SPDK_CONFIG_HAVE_LIBBSD 01:02:32.427 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 01:02:32.427 #define SPDK_CONFIG_IDXD 1 01:02:32.427 #define SPDK_CONFIG_IDXD_KERNEL 1 01:02:32.427 #undef SPDK_CONFIG_IPSEC_MB 01:02:32.427 #define SPDK_CONFIG_IPSEC_MB_DIR 01:02:32.427 #define SPDK_CONFIG_ISAL 1 01:02:32.427 #define SPDK_CONFIG_ISAL_CRYPTO 1 01:02:32.427 #define SPDK_CONFIG_ISCSI_INITIATOR 1 01:02:32.427 #define SPDK_CONFIG_LIBDIR 01:02:32.427 #undef SPDK_CONFIG_LTO 01:02:32.427 #define SPDK_CONFIG_MAX_LCORES 128 01:02:32.427 #define SPDK_CONFIG_NVME_CUSE 1 01:02:32.427 #undef SPDK_CONFIG_OCF 01:02:32.427 #define SPDK_CONFIG_OCF_PATH 01:02:32.427 #define SPDK_CONFIG_OPENSSL_PATH 01:02:32.427 #undef SPDK_CONFIG_PGO_CAPTURE 01:02:32.427 #define SPDK_CONFIG_PGO_DIR 01:02:32.427 #undef SPDK_CONFIG_PGO_USE 01:02:32.427 #define 
SPDK_CONFIG_PREFIX /usr/local 01:02:32.427 #undef SPDK_CONFIG_RAID5F 01:02:32.427 #undef SPDK_CONFIG_RBD 01:02:32.427 #define SPDK_CONFIG_RDMA 1 01:02:32.427 #define SPDK_CONFIG_RDMA_PROV verbs 01:02:32.427 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 01:02:32.427 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 01:02:32.427 #define SPDK_CONFIG_RDMA_SET_TOS 1 01:02:32.427 #define SPDK_CONFIG_SHARED 1 01:02:32.427 #undef SPDK_CONFIG_SMA 01:02:32.427 #define SPDK_CONFIG_TESTS 1 01:02:32.427 #undef SPDK_CONFIG_TSAN 01:02:32.427 #define SPDK_CONFIG_UBLK 1 01:02:32.427 #define SPDK_CONFIG_UBSAN 1 01:02:32.427 #undef SPDK_CONFIG_UNIT_TESTS 01:02:32.427 #undef SPDK_CONFIG_URING 01:02:32.427 #define SPDK_CONFIG_URING_PATH 01:02:32.427 #undef SPDK_CONFIG_URING_ZNS 01:02:32.427 #define SPDK_CONFIG_USDT 1 01:02:32.427 #undef SPDK_CONFIG_VBDEV_COMPRESS 01:02:32.427 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 01:02:32.427 #undef SPDK_CONFIG_VFIO_USER 01:02:32.427 #define SPDK_CONFIG_VFIO_USER_DIR 01:02:32.427 #define SPDK_CONFIG_VHOST 1 01:02:32.427 #define SPDK_CONFIG_VIRTIO 1 01:02:32.427 #undef SPDK_CONFIG_VTUNE 01:02:32.427 #define SPDK_CONFIG_VTUNE_DIR 01:02:32.427 #define SPDK_CONFIG_WERROR 1 01:02:32.427 #define SPDK_CONFIG_WPDK_DIR 01:02:32.427 #undef SPDK_CONFIG_XNVME 01:02:32.427 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 01:02:32.427 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 01:02:32.688 10:59:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 01:02:32.688 10:59:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 01:02:32.688 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 
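The run of ": 0" / "export SPDK_TEST_*" entries above (the list continues just below) is autotest_common.sh giving every per-feature test switch a default and then exporting it, so child scripts and the target process see the same values. A minimal sketch of that idiom, assuming the usual parameter-expansion form rather than quoting the script verbatim; the flag names are taken from this trace:

    # Default each switch only if the environment has not already set it, then export
    # it so child scripts see the same value. Under "set -x" each pair is traced as
    # "# : <value>" followed by "# export NAME", the pattern in the surrounding entries;
    # several switches show non-default values here because this CI job pre-sets them.
    : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN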
01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 01:02:32.689 
10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 01:02:32.689 10:59:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 77956 ]] 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 77956 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.ENaqoY 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.ENaqoY/tests/target /tmp/spdk.ENaqoY 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264516608 01:02:32.689 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=12804096 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13044654080 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6001614848 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13044654080 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6001614848 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267756544 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267895808 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=139264 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use 
avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=96058728448 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3644051456 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 01:02:32.690 * Looking for test storage... 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13044654080 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:32.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 
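The set_test_storage trace above reduces to a small selection loop: "df -T" is parsed into per-mount arrays (mounts, fss, sizes, avails, uses), and the candidate directories (the test directory itself, then a mktemp fallback under /tmp) are walked until one sits on a mount with enough free space for the roughly 2 GiB request. A condensed sketch of that logic, reusing the names from the trace; this is a paraphrase of the helper, not its verbatim source, and the real script has extra branches the sketch only mentions in a comment:

    requested_size=2214592512    # the 2 GiB request plus margin, as printed in the trace
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # mount point, e.g. /home
        target_space=${avails[$mount]}                                   # bytes available on that mount
        if (( target_space >= requested_size )); then
            # the real helper has additional handling for tmpfs/ramfs mounts and for "/"
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done

In this run /home/vagrant/spdk_repo/spdk/test/nvmf/target resolves to the btrfs /home mount with about 13 GB available, comfortably above the request, so it is selected and exported as SPDK_TEST_STORAGE.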
01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:32.690 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:32.691 10:59:37 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:02:32.691 Cannot find device 
"nvmf_tgt_br" 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:02:32.691 Cannot find device "nvmf_tgt_br2" 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:02:32.691 Cannot find device "nvmf_tgt_br" 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:02:32.691 Cannot find device "nvmf_tgt_br2" 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:32.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:32.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:32.691 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:32.950 10:59:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:32.950 10:59:38 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:02:32.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:02:32.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 01:02:32.950 01:02:32.950 --- 10.0.0.2 ping statistics --- 01:02:32.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:32.950 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:02:32.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:32.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 01:02:32.950 01:02:32.950 --- 10.0.0.3 ping statistics --- 01:02:32.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:32.950 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:32.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:02:32.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 01:02:32.950 01:02:32.950 --- 10.0.0.1 ping statistics --- 01:02:32.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:32.950 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 01:02:32.950 ************************************ 01:02:32.950 START TEST nvmf_filesystem_no_in_capsule 01:02:32.950 ************************************ 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:02:32.950 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78111 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78111 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 78111 ']' 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:32.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
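For reference, the topology that nvmf_veth_init built in the preceding entries can be reproduced by hand; the traced ip/iptables commands condense to roughly the following, with interface names and addresses exactly as in the log (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way and omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk                                  # the target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                                # bridge ties the host-side veth ends together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # the same reachability check the trace runs

With that in place, nvmfappstart launches the target inside the namespace (the trace shows "ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF"), so the host side can reach it as an initiator at 10.0.0.2:4420.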
01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:32.951 10:59:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:33.208 [2024-07-22 10:59:38.185791] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:33.208 [2024-07-22 10:59:38.185880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:33.208 [2024-07-22 10:59:38.327298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:33.467 [2024-07-22 10:59:38.433664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:33.467 [2024-07-22 10:59:38.433709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:33.467 [2024-07-22 10:59:38.433735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:33.467 [2024-07-22 10:59:38.433743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:33.467 [2024-07-22 10:59:38.433750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:33.467 [2024-07-22 10:59:38.433917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:33.467 [2024-07-22 10:59:38.435115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:33.467 [2024-07-22 10:59:38.435194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:33.467 [2024-07-22 10:59:38.435197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:34.031 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:34.289 [2024-07-22 10:59:39.244525] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:34.289 
10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:34.289 Malloc1 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:34.289 [2024-07-22 10:59:39.449030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 01:02:34.289 { 01:02:34.289 "aliases": [ 01:02:34.289 "9cc76644-9b2b-49ca-bbfe-2ff1493e56de" 01:02:34.289 ], 01:02:34.289 "assigned_rate_limits": { 01:02:34.289 "r_mbytes_per_sec": 0, 01:02:34.289 "rw_ios_per_sec": 0, 01:02:34.289 "rw_mbytes_per_sec": 0, 01:02:34.289 "w_mbytes_per_sec": 0 01:02:34.289 }, 01:02:34.289 "block_size": 512, 01:02:34.289 "claim_type": "exclusive_write", 01:02:34.289 "claimed": true, 01:02:34.289 "driver_specific": {}, 01:02:34.289 "memory_domains": [ 01:02:34.289 { 01:02:34.289 "dma_device_id": "system", 01:02:34.289 "dma_device_type": 1 01:02:34.289 }, 01:02:34.289 { 01:02:34.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:34.289 "dma_device_type": 2 01:02:34.289 } 01:02:34.289 ], 01:02:34.289 "name": "Malloc1", 01:02:34.289 "num_blocks": 1048576, 01:02:34.289 "product_name": "Malloc disk", 01:02:34.289 "supported_io_types": { 01:02:34.289 "abort": true, 01:02:34.289 "compare": false, 01:02:34.289 "compare_and_write": false, 01:02:34.289 "copy": true, 01:02:34.289 "flush": true, 01:02:34.289 "get_zone_info": false, 01:02:34.289 "nvme_admin": false, 01:02:34.289 "nvme_io": false, 01:02:34.289 "nvme_io_md": false, 01:02:34.289 "nvme_iov_md": false, 01:02:34.289 "read": true, 01:02:34.289 "reset": true, 01:02:34.289 "seek_data": false, 01:02:34.289 "seek_hole": false, 01:02:34.289 "unmap": true, 01:02:34.289 "write": true, 01:02:34.289 "write_zeroes": true, 01:02:34.289 "zcopy": true, 01:02:34.289 "zone_append": false, 01:02:34.289 "zone_management": false 01:02:34.289 }, 01:02:34.289 "uuid": "9cc76644-9b2b-49ca-bbfe-2ff1493e56de", 01:02:34.289 "zoned": false 01:02:34.289 } 01:02:34.289 ]' 01:02:34.289 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:02:34.547 10:59:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 01:02:37.071 10:59:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:38.002 ************************************ 
01:02:38.002 START TEST filesystem_ext4 01:02:38.002 ************************************ 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:02:38.002 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 01:02:38.003 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 01:02:38.003 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 01:02:38.003 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 01:02:38.003 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 01:02:38.003 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 01:02:38.003 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 01:02:38.003 10:59:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 01:02:38.003 mke2fs 1.46.5 (30-Dec-2021) 01:02:38.003 Discarding device blocks: 0/522240 done 01:02:38.003 Creating filesystem with 522240 1k blocks and 130560 inodes 01:02:38.003 Filesystem UUID: 218bfbb7-b5cb-41dc-9034-2836c1bfd2d1 01:02:38.003 Superblock backups stored on blocks: 01:02:38.003 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 01:02:38.003 01:02:38.003 Allocating group tables: 0/64 done 01:02:38.003 Writing inode tables: 0/64 done 01:02:38.003 Creating journal (8192 blocks): done 01:02:38.003 Writing superblocks and filesystem accounting information: 0/64 done 01:02:38.003 01:02:38.003 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 01:02:38.003 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:02:38.003 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 01:02:38.260 10:59:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 78111 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:02:38.260 01:02:38.260 real 0m0.384s 01:02:38.260 user 0m0.026s 01:02:38.260 sys 0m0.057s 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 01:02:38.260 ************************************ 01:02:38.260 END TEST filesystem_ext4 01:02:38.260 ************************************ 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:38.260 ************************************ 01:02:38.260 START TEST filesystem_btrfs 01:02:38.260 ************************************ 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 01:02:38.260 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 01:02:38.261 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 01:02:38.261 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 01:02:38.261 
10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 01:02:38.519 btrfs-progs v6.6.2 01:02:38.519 See https://btrfs.readthedocs.io for more information. 01:02:38.519 01:02:38.519 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 01:02:38.519 NOTE: several default settings have changed in version 5.15, please make sure 01:02:38.519 this does not affect your deployments: 01:02:38.519 - DUP for metadata (-m dup) 01:02:38.519 - enabled no-holes (-O no-holes) 01:02:38.519 - enabled free-space-tree (-R free-space-tree) 01:02:38.519 01:02:38.519 Label: (null) 01:02:38.519 UUID: fb9d62df-204f-4742-b9c4-d769b6c1bfdc 01:02:38.519 Node size: 16384 01:02:38.519 Sector size: 4096 01:02:38.519 Filesystem size: 510.00MiB 01:02:38.519 Block group profiles: 01:02:38.519 Data: single 8.00MiB 01:02:38.519 Metadata: DUP 32.00MiB 01:02:38.519 System: DUP 8.00MiB 01:02:38.519 SSD detected: yes 01:02:38.519 Zoned device: no 01:02:38.519 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 01:02:38.519 Runtime features: free-space-tree 01:02:38.519 Checksum: crc32c 01:02:38.519 Number of devices: 1 01:02:38.519 Devices: 01:02:38.519 ID SIZE PATH 01:02:38.519 1 510.00MiB /dev/nvme0n1p1 01:02:38.519 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 78111 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:02:38.519 01:02:38.519 real 0m0.234s 01:02:38.519 user 0m0.023s 01:02:38.519 sys 0m0.060s 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 01:02:38.519 
************************************ 01:02:38.519 END TEST filesystem_btrfs 01:02:38.519 ************************************ 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:38.519 ************************************ 01:02:38.519 START TEST filesystem_xfs 01:02:38.519 ************************************ 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 01:02:38.519 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 01:02:38.520 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 01:02:38.520 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 01:02:38.520 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 01:02:38.520 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 01:02:38.520 10:59:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 01:02:38.777 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 01:02:38.777 = sectsz=512 attr=2, projid32bit=1 01:02:38.777 = crc=1 finobt=1, sparse=1, rmapbt=0 01:02:38.777 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 01:02:38.777 data = bsize=4096 blocks=130560, imaxpct=25 01:02:38.777 = sunit=0 swidth=0 blks 01:02:38.777 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 01:02:38.777 log =internal log bsize=4096 blocks=16384, version=2 01:02:38.777 = sectsz=512 sunit=0 blks, lazy-count=1 01:02:38.777 realtime =none extsz=4096 blocks=0, rtextents=0 01:02:39.343 Discarding blocks...Done. 
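Each filesystem_* subtest in this suite exercises the same sequence from target/filesystem.sh on the exported namespace's first partition: make the filesystem, mount it, create and remove a file with syncs in between, unmount, check that the nvmf_tgt process is still alive, and confirm via lsblk that both the NVMe device and its partition remain visible. A condensed sketch of that loop as it appears in the trace, with the ext4 case shown (the btrfs and xfs subtests differ only in the mkfs invocation):

# Condensed from target/filesystem.sh as traced above (pid 78111 is this run's nvmf_tgt).
mkfs.ext4 -F /dev/nvme0n1p1        # btrfs: mkfs.btrfs -f ...   xfs: mkfs.xfs -f ...
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 78111                      # target must still be running after the I/O
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1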
01:02:39.343 10:59:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 01:02:39.343 10:59:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 78111 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:02:41.872 01:02:41.872 real 0m3.143s 01:02:41.872 user 0m0.024s 01:02:41.872 sys 0m0.053s 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 01:02:41.872 ************************************ 01:02:41.872 END TEST filesystem_xfs 01:02:41.872 ************************************ 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:02:41.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:02:41.872 10:59:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 78111 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 78111 ']' 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 78111 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78111 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:41.872 killing process with pid 78111 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78111' 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 78111 01:02:41.872 10:59:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 78111 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 01:02:42.439 01:02:42.439 real 0m9.313s 01:02:42.439 user 0m35.026s 01:02:42.439 sys 0m1.782s 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:42.439 ************************************ 01:02:42.439 END TEST nvmf_filesystem_no_in_capsule 01:02:42.439 ************************************ 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 01:02:42.439 ************************************ 01:02:42.439 START TEST nvmf_filesystem_in_capsule 01:02:42.439 ************************************ 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78423 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78423 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 78423 ']' 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:02:42.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:42.439 10:59:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:42.439 [2024-07-22 10:59:47.557759] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:42.439 [2024-07-22 10:59:47.557893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:42.697 [2024-07-22 10:59:47.702043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:42.697 [2024-07-22 10:59:47.795753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:42.697 [2024-07-22 10:59:47.795821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:42.697 [2024-07-22 10:59:47.795831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:42.697 [2024-07-22 10:59:47.795839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
01:02:42.697 [2024-07-22 10:59:47.795846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:42.697 [2024-07-22 10:59:47.796049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:42.697 [2024-07-22 10:59:47.796213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:42.697 [2024-07-22 10:59:47.796344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:02:42.697 [2024-07-22 10:59:47.796348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:43.632 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:43.632 [2024-07-22 10:59:48.638533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:43.633 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:43.633 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 01:02:43.633 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:43.633 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:43.633 Malloc1 01:02:43.633 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:43.633 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:02:43.633 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:43.633 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:43.890 10:59:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:43.890 [2024-07-22 10:59:48.856617] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 01:02:43.890 { 01:02:43.890 "aliases": [ 01:02:43.890 "9fdc92a7-a1c4-49a2-9de5-5b17da9a408d" 01:02:43.890 ], 01:02:43.890 "assigned_rate_limits": { 01:02:43.890 "r_mbytes_per_sec": 0, 01:02:43.890 "rw_ios_per_sec": 0, 01:02:43.890 "rw_mbytes_per_sec": 0, 01:02:43.890 "w_mbytes_per_sec": 0 01:02:43.890 }, 01:02:43.890 "block_size": 512, 01:02:43.890 "claim_type": "exclusive_write", 01:02:43.890 "claimed": true, 01:02:43.890 "driver_specific": {}, 01:02:43.890 "memory_domains": [ 01:02:43.890 { 01:02:43.890 "dma_device_id": "system", 01:02:43.890 "dma_device_type": 1 01:02:43.890 }, 01:02:43.890 { 01:02:43.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:02:43.890 "dma_device_type": 2 01:02:43.890 } 01:02:43.890 ], 01:02:43.890 "name": "Malloc1", 01:02:43.890 "num_blocks": 1048576, 01:02:43.890 "product_name": "Malloc disk", 01:02:43.890 "supported_io_types": { 01:02:43.890 "abort": true, 01:02:43.890 "compare": false, 01:02:43.890 "compare_and_write": false, 01:02:43.890 "copy": true, 01:02:43.890 "flush": true, 01:02:43.890 "get_zone_info": false, 01:02:43.890 "nvme_admin": false, 01:02:43.890 "nvme_io": false, 01:02:43.890 "nvme_io_md": false, 01:02:43.890 "nvme_iov_md": false, 01:02:43.890 "read": true, 01:02:43.890 "reset": true, 01:02:43.890 "seek_data": false, 01:02:43.890 "seek_hole": false, 01:02:43.890 "unmap": true, 
01:02:43.890 "write": true, 01:02:43.890 "write_zeroes": true, 01:02:43.890 "zcopy": true, 01:02:43.890 "zone_append": false, 01:02:43.890 "zone_management": false 01:02:43.890 }, 01:02:43.890 "uuid": "9fdc92a7-a1c4-49a2-9de5-5b17da9a408d", 01:02:43.890 "zoned": false 01:02:43.890 } 01:02:43.890 ]' 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 01:02:43.890 10:59:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 01:02:43.890 10:59:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 01:02:43.890 10:59:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:02:44.147 10:59:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 01:02:44.147 10:59:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 01:02:44.147 10:59:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:02:44.147 10:59:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:02:44.147 10:59:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 01:02:46.044 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 01:02:46.045 10:59:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 01:02:46.045 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 01:02:46.045 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 01:02:46.045 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 01:02:46.045 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 01:02:46.045 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 01:02:46.045 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 01:02:46.045 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 01:02:46.302 10:59:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:47.236 ************************************ 01:02:47.236 START TEST filesystem_in_capsule_ext4 01:02:47.236 ************************************ 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 01:02:47.236 10:59:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 01:02:47.236 mke2fs 1.46.5 (30-Dec-2021) 01:02:47.236 Discarding device blocks: 0/522240 done 01:02:47.236 Creating filesystem with 522240 1k blocks and 130560 inodes 01:02:47.236 Filesystem UUID: 6a1b77ee-fd44-4f64-bf60-19b2ae8a800f 01:02:47.236 Superblock backups stored on blocks: 01:02:47.236 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 01:02:47.236 01:02:47.236 Allocating group tables: 0/64 done 01:02:47.236 Writing inode tables: 0/64 done 01:02:47.236 Creating journal (8192 blocks): done 01:02:47.236 Writing superblocks and filesystem accounting information: 0/64 done 01:02:47.236 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 01:02:47.236 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 78423 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:02:47.494 01:02:47.494 real 0m0.377s 01:02:47.494 user 0m0.024s 01:02:47.494 sys 0m0.056s 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:47.494 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 01:02:47.494 ************************************ 01:02:47.494 END TEST filesystem_in_capsule_ext4 01:02:47.494 ************************************ 01:02:47.752 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 01:02:47.752 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 01:02:47.752 10:59:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:02:47.752 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:47.752 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:47.752 ************************************ 01:02:47.752 START TEST filesystem_in_capsule_btrfs 01:02:47.752 ************************************ 01:02:47.752 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 01:02:47.753 btrfs-progs v6.6.2 01:02:47.753 See https://btrfs.readthedocs.io for more information. 01:02:47.753 01:02:47.753 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
01:02:47.753 NOTE: several default settings have changed in version 5.15, please make sure 01:02:47.753 this does not affect your deployments: 01:02:47.753 - DUP for metadata (-m dup) 01:02:47.753 - enabled no-holes (-O no-holes) 01:02:47.753 - enabled free-space-tree (-R free-space-tree) 01:02:47.753 01:02:47.753 Label: (null) 01:02:47.753 UUID: ea651e63-5025-4cfe-afac-d7c06570505c 01:02:47.753 Node size: 16384 01:02:47.753 Sector size: 4096 01:02:47.753 Filesystem size: 510.00MiB 01:02:47.753 Block group profiles: 01:02:47.753 Data: single 8.00MiB 01:02:47.753 Metadata: DUP 32.00MiB 01:02:47.753 System: DUP 8.00MiB 01:02:47.753 SSD detected: yes 01:02:47.753 Zoned device: no 01:02:47.753 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 01:02:47.753 Runtime features: free-space-tree 01:02:47.753 Checksum: crc32c 01:02:47.753 Number of devices: 1 01:02:47.753 Devices: 01:02:47.753 ID SIZE PATH 01:02:47.753 1 510.00MiB /dev/nvme0n1p1 01:02:47.753 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 78423 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:02:47.753 01:02:47.753 real 0m0.231s 01:02:47.753 user 0m0.022s 01:02:47.753 sys 0m0.061s 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:47.753 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 01:02:47.753 ************************************ 01:02:47.753 END TEST filesystem_in_capsule_btrfs 01:02:47.753 ************************************ 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:48.011 ************************************ 01:02:48.011 START TEST filesystem_in_capsule_xfs 01:02:48.011 ************************************ 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 01:02:48.011 10:59:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 01:02:48.011 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 01:02:48.011 = sectsz=512 attr=2, projid32bit=1 01:02:48.011 = crc=1 finobt=1, sparse=1, rmapbt=0 01:02:48.011 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 01:02:48.011 data = bsize=4096 blocks=130560, imaxpct=25 01:02:48.011 = sunit=0 swidth=0 blks 01:02:48.011 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 01:02:48.011 log =internal log bsize=4096 blocks=16384, version=2 01:02:48.011 = sectsz=512 sunit=0 blks, lazy-count=1 01:02:48.011 realtime =none extsz=4096 blocks=0, rtextents=0 01:02:48.575 Discarding blocks...Done. 
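The ext4, btrfs, and xfs subtests above all drive the same make_filesystem/mount/unmount flow from target/filesystem.sh against the NVMe-oF-attached namespace. As a hedged sketch of the equivalent manual steps (device and mount-point names are the ones in the trace; adjust for your host, and note the force flag differs per filesystem):

    # create a filesystem on the exported namespace's partition, exercise it, then verify
    mkfs.xfs -f /dev/nvme0n1p1              # ext4 uses mkfs.ext4 -F, btrfs uses mkfs.btrfs -f, as traced above
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync           # write something and flush it over the fabric
    rm /mnt/device/aaa && sync
    umount /mnt/device
    lsblk -l -o NAME | grep -q -w nvme0n1   # whole namespace still visible after unmount
    lsblk -l -o NAME | grep -q -w nvme0n1p1 # partition still visible after unmount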
01:02:48.575 10:59:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 01:02:48.576 10:59:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 78423 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:02:50.473 01:02:50.473 real 0m2.612s 01:02:50.473 user 0m0.023s 01:02:50.473 sys 0m0.056s 01:02:50.473 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:50.474 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 01:02:50.474 ************************************ 01:02:50.474 END TEST filesystem_in_capsule_xfs 01:02:50.474 ************************************ 01:02:50.474 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 01:02:50.474 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 01:02:50.474 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:02:50.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:02:50.732 10:59:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 78423 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 78423 ']' 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 78423 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78423 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:50.732 killing process with pid 78423 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78423' 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 78423 01:02:50.732 10:59:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 78423 01:02:51.295 10:59:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 01:02:51.295 01:02:51.295 real 0m8.849s 01:02:51.295 user 0m33.460s 01:02:51.296 sys 0m1.541s 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:51.296 ************************************ 01:02:51.296 END TEST nvmf_filesystem_in_capsule 01:02:51.296 ************************************ 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:02:51.296 rmmod nvme_tcp 01:02:51.296 rmmod nvme_fabrics 01:02:51.296 rmmod nvme_keyring 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:51.296 10:59:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:02:51.553 01:02:51.553 real 0m18.998s 01:02:51.553 user 1m8.729s 01:02:51.553 sys 0m3.707s 01:02:51.553 10:59:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:51.553 10:59:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 01:02:51.553 ************************************ 01:02:51.553 END TEST nvmf_filesystem 01:02:51.553 ************************************ 01:02:51.553 10:59:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:02:51.553 10:59:56 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 01:02:51.553 10:59:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:02:51.553 10:59:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:51.553 10:59:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:02:51.553 ************************************ 01:02:51.553 START TEST nvmf_target_discovery 01:02:51.553 ************************************ 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 01:02:51.553 * Looking for test storage... 
01:02:51.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:02:51.553 Cannot find device "nvmf_tgt_br" 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:02:51.553 Cannot find device "nvmf_tgt_br2" 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:02:51.553 Cannot find device "nvmf_tgt_br" 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:02:51.553 Cannot find device "nvmf_tgt_br2" 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 01:02:51.553 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:51.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:51.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:51.811 10:59:56 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:51.811 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:02:51.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:02:51.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 01:02:51.811 01:02:51.811 --- 10.0.0.2 ping statistics --- 01:02:51.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:51.812 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:02:51.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:51.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 01:02:51.812 01:02:51.812 --- 10.0.0.3 ping statistics --- 01:02:51.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:51.812 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:51.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:51.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 01:02:51.812 01:02:51.812 --- 10.0.0.1 ping statistics --- 01:02:51.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:51.812 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=78882 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 78882 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 78882 ']' 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # 
local max_retries=100 01:02:51.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:51.812 10:59:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:02:52.069 [2024-07-22 10:59:57.048901] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:52.069 [2024-07-22 10:59:57.049014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:52.069 [2024-07-22 10:59:57.192763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:52.327 [2024-07-22 10:59:57.285990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:52.327 [2024-07-22 10:59:57.286045] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:52.327 [2024-07-22 10:59:57.286059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:52.327 [2024-07-22 10:59:57.286070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:52.327 [2024-07-22 10:59:57.286079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
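The preceding block shows nvmf_veth_init wiring up the test network and then launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace. A minimal, hedged subset of that topology, reusing only names and addresses that appear in the trace (the real helper also sets up nvmf_tgt_if2/10.0.0.3 and brings up lo inside the namespace), looks like:

    # veth pair into the target namespace, bridged back to the initiator-side interface
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability check, as in the trace
    # the target application itself then runs inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF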
01:02:52.327 [2024-07-22 10:59:57.286443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:52.327 [2024-07-22 10:59:57.286698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:52.327 [2024-07-22 10:59:57.286808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:02:52.327 [2024-07-22 10:59:57.286815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:52.892 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:52.892 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 01:02:52.892 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:02:52.892 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:52.892 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.150 [2024-07-22 10:59:58.109906] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.150 Null1 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 01:02:53.150 [2024-07-22 10:59:58.168495] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.150 Null2 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.150 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 Null3 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 Null4 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.151 
10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 4420 01:02:53.151 01:02:53.151 Discovery Log Number of Records 6, Generation counter 6 01:02:53.151 =====Discovery Log Entry 0====== 01:02:53.151 trtype: tcp 01:02:53.151 adrfam: ipv4 01:02:53.151 subtype: current discovery subsystem 01:02:53.151 treq: not required 01:02:53.151 portid: 0 01:02:53.151 trsvcid: 4420 01:02:53.151 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:02:53.151 traddr: 10.0.0.2 01:02:53.151 eflags: explicit discovery connections, duplicate discovery information 01:02:53.151 sectype: none 01:02:53.151 =====Discovery Log Entry 1====== 01:02:53.151 trtype: tcp 01:02:53.151 adrfam: ipv4 01:02:53.151 subtype: nvme subsystem 01:02:53.151 treq: not required 01:02:53.151 portid: 0 01:02:53.151 trsvcid: 4420 01:02:53.151 subnqn: nqn.2016-06.io.spdk:cnode1 01:02:53.151 traddr: 10.0.0.2 01:02:53.151 eflags: none 01:02:53.151 sectype: none 01:02:53.151 =====Discovery Log Entry 2====== 01:02:53.151 trtype: tcp 01:02:53.151 adrfam: ipv4 01:02:53.151 subtype: nvme subsystem 01:02:53.151 treq: not required 01:02:53.151 portid: 0 01:02:53.151 trsvcid: 4420 01:02:53.151 subnqn: nqn.2016-06.io.spdk:cnode2 01:02:53.151 traddr: 10.0.0.2 01:02:53.151 eflags: none 01:02:53.151 sectype: none 01:02:53.151 =====Discovery Log Entry 3====== 01:02:53.151 trtype: tcp 01:02:53.151 adrfam: ipv4 01:02:53.151 subtype: nvme subsystem 01:02:53.151 treq: not required 01:02:53.151 portid: 0 01:02:53.151 trsvcid: 4420 01:02:53.151 subnqn: nqn.2016-06.io.spdk:cnode3 01:02:53.151 traddr: 10.0.0.2 01:02:53.151 eflags: none 01:02:53.151 sectype: none 01:02:53.151 =====Discovery Log Entry 4====== 01:02:53.151 trtype: tcp 01:02:53.151 adrfam: ipv4 01:02:53.151 subtype: nvme subsystem 01:02:53.151 treq: not required 01:02:53.151 portid: 0 01:02:53.151 trsvcid: 4420 01:02:53.151 subnqn: nqn.2016-06.io.spdk:cnode4 01:02:53.151 traddr: 10.0.0.2 01:02:53.151 eflags: none 01:02:53.151 sectype: none 01:02:53.151 =====Discovery Log Entry 5====== 01:02:53.151 trtype: tcp 01:02:53.151 adrfam: ipv4 01:02:53.151 subtype: discovery subsystem referral 01:02:53.151 treq: not required 01:02:53.151 portid: 0 01:02:53.151 trsvcid: 4430 01:02:53.151 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:02:53.151 traddr: 10.0.0.2 01:02:53.151 eflags: none 01:02:53.151 sectype: none 01:02:53.151 Perform nvmf subsystem discovery via RPC 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.151 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.410 [ 01:02:53.410 { 01:02:53.410 "allow_any_host": true, 01:02:53.410 "hosts": [], 01:02:53.410 "listen_addresses": [ 01:02:53.410 { 01:02:53.410 "adrfam": "IPv4", 01:02:53.410 "traddr": "10.0.0.2", 01:02:53.410 "trsvcid": "4420", 01:02:53.410 "trtype": "TCP" 01:02:53.410 } 01:02:53.410 ], 01:02:53.410 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:02:53.410 "subtype": "Discovery" 01:02:53.410 }, 01:02:53.410 { 01:02:53.410 "allow_any_host": true, 01:02:53.410 "hosts": [], 01:02:53.410 "listen_addresses": [ 01:02:53.410 { 
01:02:53.410 "adrfam": "IPv4", 01:02:53.410 "traddr": "10.0.0.2", 01:02:53.410 "trsvcid": "4420", 01:02:53.410 "trtype": "TCP" 01:02:53.410 } 01:02:53.410 ], 01:02:53.410 "max_cntlid": 65519, 01:02:53.410 "max_namespaces": 32, 01:02:53.410 "min_cntlid": 1, 01:02:53.410 "model_number": "SPDK bdev Controller", 01:02:53.410 "namespaces": [ 01:02:53.410 { 01:02:53.410 "bdev_name": "Null1", 01:02:53.410 "name": "Null1", 01:02:53.410 "nguid": "F07532F989784583ADC601F821937890", 01:02:53.410 "nsid": 1, 01:02:53.410 "uuid": "f07532f9-8978-4583-adc6-01f821937890" 01:02:53.410 } 01:02:53.410 ], 01:02:53.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:53.410 "serial_number": "SPDK00000000000001", 01:02:53.410 "subtype": "NVMe" 01:02:53.410 }, 01:02:53.410 { 01:02:53.410 "allow_any_host": true, 01:02:53.410 "hosts": [], 01:02:53.410 "listen_addresses": [ 01:02:53.410 { 01:02:53.410 "adrfam": "IPv4", 01:02:53.410 "traddr": "10.0.0.2", 01:02:53.410 "trsvcid": "4420", 01:02:53.410 "trtype": "TCP" 01:02:53.410 } 01:02:53.410 ], 01:02:53.410 "max_cntlid": 65519, 01:02:53.410 "max_namespaces": 32, 01:02:53.410 "min_cntlid": 1, 01:02:53.410 "model_number": "SPDK bdev Controller", 01:02:53.410 "namespaces": [ 01:02:53.410 { 01:02:53.410 "bdev_name": "Null2", 01:02:53.410 "name": "Null2", 01:02:53.410 "nguid": "0FA727ADDED74824B60763500B43E61F", 01:02:53.410 "nsid": 1, 01:02:53.410 "uuid": "0fa727ad-ded7-4824-b607-63500b43e61f" 01:02:53.410 } 01:02:53.410 ], 01:02:53.410 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:02:53.410 "serial_number": "SPDK00000000000002", 01:02:53.410 "subtype": "NVMe" 01:02:53.410 }, 01:02:53.410 { 01:02:53.410 "allow_any_host": true, 01:02:53.410 "hosts": [], 01:02:53.410 "listen_addresses": [ 01:02:53.410 { 01:02:53.410 "adrfam": "IPv4", 01:02:53.410 "traddr": "10.0.0.2", 01:02:53.410 "trsvcid": "4420", 01:02:53.410 "trtype": "TCP" 01:02:53.410 } 01:02:53.410 ], 01:02:53.410 "max_cntlid": 65519, 01:02:53.410 "max_namespaces": 32, 01:02:53.410 "min_cntlid": 1, 01:02:53.410 "model_number": "SPDK bdev Controller", 01:02:53.410 "namespaces": [ 01:02:53.410 { 01:02:53.410 "bdev_name": "Null3", 01:02:53.410 "name": "Null3", 01:02:53.410 "nguid": "8E94338BFA8E4AC988C4FDCA75981D11", 01:02:53.410 "nsid": 1, 01:02:53.410 "uuid": "8e94338b-fa8e-4ac9-88c4-fdca75981d11" 01:02:53.410 } 01:02:53.410 ], 01:02:53.410 "nqn": "nqn.2016-06.io.spdk:cnode3", 01:02:53.410 "serial_number": "SPDK00000000000003", 01:02:53.410 "subtype": "NVMe" 01:02:53.410 }, 01:02:53.411 { 01:02:53.411 "allow_any_host": true, 01:02:53.411 "hosts": [], 01:02:53.411 "listen_addresses": [ 01:02:53.411 { 01:02:53.411 "adrfam": "IPv4", 01:02:53.411 "traddr": "10.0.0.2", 01:02:53.411 "trsvcid": "4420", 01:02:53.411 "trtype": "TCP" 01:02:53.411 } 01:02:53.411 ], 01:02:53.411 "max_cntlid": 65519, 01:02:53.411 "max_namespaces": 32, 01:02:53.411 "min_cntlid": 1, 01:02:53.411 "model_number": "SPDK bdev Controller", 01:02:53.411 "namespaces": [ 01:02:53.411 { 01:02:53.411 "bdev_name": "Null4", 01:02:53.411 "name": "Null4", 01:02:53.411 "nguid": "F2E5B962DBE94E7B958D826F7E95C0CF", 01:02:53.411 "nsid": 1, 01:02:53.411 "uuid": "f2e5b962-dbe9-4e7b-958d-826f7e95c0cf" 01:02:53.411 } 01:02:53.411 ], 01:02:53.411 "nqn": "nqn.2016-06.io.spdk:cnode4", 01:02:53.411 "serial_number": "SPDK00000000000004", 01:02:53.411 "subtype": "NVMe" 01:02:53.411 } 01:02:53.411 ] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:02:53.411 rmmod nvme_tcp 01:02:53.411 rmmod nvme_fabrics 01:02:53.411 rmmod nvme_keyring 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 78882 ']' 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 78882 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 78882 ']' 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 78882 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78882 01:02:53.411 killing process with pid 78882 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78882' 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 78882 01:02:53.411 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 78882 01:02:53.670 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:02:53.670 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:02:53.670 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:02:53.670 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:02:53.670 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 01:02:53.670 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:53.670 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:53.670 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:53.928 10:59:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:02:53.928 01:02:53.928 real 0m2.332s 01:02:53.928 user 0m6.513s 01:02:53.928 sys 0m0.620s 01:02:53.928 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:53.928 10:59:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:02:53.928 ************************************ 01:02:53.928 END TEST nvmf_target_discovery 01:02:53.928 ************************************ 01:02:53.928 10:59:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:02:53.928 10:59:58 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 01:02:53.928 10:59:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:02:53.928 10:59:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:53.928 10:59:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:02:53.928 ************************************ 01:02:53.928 START TEST nvmf_referrals 01:02:53.928 ************************************ 01:02:53.928 10:59:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 01:02:53.928 * Looking for test storage... 
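Looking back at the discovery test that just finished above: it tears down exactly what it created, removing each subsystem before its backing null bdev, dropping the referral, and then confirming with bdev_get_bdevs that no bdevs remain before shutting the target down. A minimal sketch of that cleanup pattern, assuming the RPC client lives at scripts/rpc.py under the repo checkout (the path is an assumption; the test's rpc_cmd wrapper resolves it internally):

    # Sketch only: mirrors the cleanup loop visible in discovery.sh's xtrace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # assumed client location
    for i in $(seq 1 4); do
        "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # subsystem first
        "$rpc" bdev_null_delete "Null${i}"                             # then its backing bdev
    done
    "$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430   # drop the referral
    [ -z "$("$rpc" bdev_get_bdevs | jq -r '.[].name')" ]               # expect nothing left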
01:02:53.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:02:53.928 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:02:53.929 Cannot find device "nvmf_tgt_br" 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:02:53.929 Cannot find device "nvmf_tgt_br2" 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:02:53.929 Cannot find device "nvmf_tgt_br" 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:02:53.929 Cannot find device "nvmf_tgt_br2" 
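nvmf_veth_init starts by tearing down whatever a previous run may have left behind, which is why the "Cannot find device" errors around this point are expected and ignored. A rough equivalent of that idempotent cleanup, written as a sketch (the exact error handling inside nvmf/common.sh may differ):

    # Sketch: remove any stale test topology; failures are harmless on a clean host.
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down     2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if        2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true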
01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:02:53.929 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:54.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:54.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:54.186 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:02:54.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:02:54.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 01:02:54.186 01:02:54.186 --- 10.0.0.2 ping statistics --- 01:02:54.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:54.186 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:02:54.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:54.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 01:02:54.187 01:02:54.187 --- 10.0.0.3 ping statistics --- 01:02:54.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:54.187 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:54.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:54.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:02:54.187 01:02:54.187 --- 10.0.0.1 ping statistics --- 01:02:54.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:54.187 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=79105 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 79105 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 79105 ']' 01:02:54.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
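The nvmf_veth_init sequence above builds a small virtual topology: two veth pairs whose target ends are moved into the private namespace nvmf_tgt_ns_spdk (10.0.0.2/24 and 10.0.0.3/24), one veth pair for the initiator side (10.0.0.1/24 stays on the host), and a bridge nvmf_br joining the host-side peers, plus an iptables rule admitting the NVMe/TCP port. Condensed from the xtrace above into a sketch (the FORWARD rule and the second and third pings are omitted):

    # Sketch: veth/bridge topology created by nvmf_veth_init (condensed).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" up
        ip link set "$peer" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # host reaches the namespace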
01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:54.187 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.444 [2024-07-22 10:59:59.435308] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:54.445 [2024-07-22 10:59:59.435380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:54.445 [2024-07-22 10:59:59.575386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:54.701 [2024-07-22 10:59:59.656781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:54.701 [2024-07-22 10:59:59.656841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:54.701 [2024-07-22 10:59:59.656855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:54.701 [2024-07-22 10:59:59.656866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:54.701 [2024-07-22 10:59:59.656875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
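Because NVMF_APP is prefixed with the namespace-exec command, the target process binds its TCP listeners to the in-namespace addresses while the JSON-RPC Unix socket at /var/tmp/spdk.sock remains reachable from the host (network namespaces do not isolate the filesystem). A rough stand-in for the launch-and-wait step, using the binary path and flags from the xtrace above; the real waitforlisten helper is more thorough:

    # Sketch: start nvmf_tgt inside the namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done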
01:02:54.701 [2024-07-22 10:59:59.657052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:54.701 [2024-07-22 10:59:59.657134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:54.701 [2024-07-22 10:59:59.660005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:02:54.701 [2024-07-22 10:59:59.660022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:54.701 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:54.701 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.702 [2024-07-22 10:59:59.831044] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.702 [2024-07-22 10:59:59.854318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 01:02:54.702 
10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 01:02:54.702 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:02:54.958 10:59:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd 
nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:54.958 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:55.215 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 01:02:55.472 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:02:55.472 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 01:02:55.472 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 01:02:55.472 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 01:02:55.472 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:55.472 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 01:02:55.472 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
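Each verification above reads the referral list twice and compares: once from the target via nvmf_discovery_get_referrals, and once from the initiator side by pulling the discovery log with nvme discover and filtering out the current discovery subsystem record. A compact sketch of that comparison (the rpc.py path is an assumption, and the --hostnqn/--hostid arguments the test passes are omitted here):

    # Sketch: referrals reported over RPC must match what the discovery log advertises.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # assumed client location
    rpc_ips=$("$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    log_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
              | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [ "$rpc_ips" = "$log_ips" ] || echo "referral mismatch" >&2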
01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 01:02:55.473 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:55.729 11:00:00 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 01:02:55.729 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:02:55.730 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:02:55.730 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -a 10.0.0.2 -s 8009 -o json 01:02:55.730 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:02:55.730 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:02:55.730 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 01:02:55.730 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 01:02:55.730 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 01:02:55.730 11:00:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 01:02:55.988 11:00:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 01:02:55.988 11:00:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 01:02:55.988 11:00:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:02:55.988 11:00:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 01:02:55.988 11:00:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 01:02:55.988 11:00:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:02:55.988 rmmod nvme_tcp 01:02:55.988 rmmod nvme_fabrics 01:02:55.988 rmmod nvme_keyring 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 79105 ']' 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 79105 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 79105 ']' 01:02:55.988 11:00:01 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 79105 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79105 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:02:55.988 killing process with pid 79105 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79105' 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 79105 01:02:55.988 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 79105 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:02:56.245 ************************************ 01:02:56.245 END TEST nvmf_referrals 01:02:56.245 ************************************ 01:02:56.245 01:02:56.245 real 0m2.391s 01:02:56.245 user 0m7.141s 01:02:56.245 sys 0m0.786s 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 01:02:56.245 11:00:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:02:56.245 11:00:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:02:56.245 11:00:01 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 01:02:56.245 11:00:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:02:56.245 11:00:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:02:56.245 11:00:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:02:56.245 ************************************ 01:02:56.245 START TEST nvmf_connect_disconnect 01:02:56.245 ************************************ 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 01:02:56.245 * Looking for test storage... 
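The final referral checks in the test above exercise the optional subsystem NQN argument: a referral added with -n nqn.2016-06.io.spdk:cnode1 shows up in the discovery log as an "nvme subsystem" record carrying that subnqn, while one added with -n discovery is reported as a "discovery subsystem referral" under the well-known discovery NQN. A hedged sketch of that check, reusing the jq selectors from the xtrace (rpc.py path assumed as before):

    # Sketch: a referral naming a subsystem NQN is advertised as an "nvme subsystem" record.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # assumed client location
    "$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'
    # expected output: nqn.2016-06.io.spdk:cnode1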
01:02:56.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:56.245 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 01:02:56.502 11:00:01 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:02:56.502 Cannot find device "nvmf_tgt_br" 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:02:56.502 Cannot find device "nvmf_tgt_br2" 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:02:56.502 Cannot find device "nvmf_tgt_br" 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:02:56.502 Cannot find device 
"nvmf_tgt_br2" 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:56.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:56.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:56.502 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:02:56.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:02:56.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 01:02:56.759 01:02:56.759 --- 10.0.0.2 ping statistics --- 01:02:56.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:56.759 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:02:56.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:56.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 01:02:56.759 01:02:56.759 --- 10.0.0.3 ping statistics --- 01:02:56.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:56.759 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:56.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:56.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 01:02:56.759 01:02:56.759 --- 10.0.0.1 ping statistics --- 01:02:56.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:56.759 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=79395 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 79395 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 79395 ']' 01:02:56.759 
11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:56.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 01:02:56.759 11:00:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:02:56.759 [2024-07-22 11:00:01.854823] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:02:56.759 [2024-07-22 11:00:01.854882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:57.016 [2024-07-22 11:00:01.989139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:57.016 [2024-07-22 11:00:02.050631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:57.016 [2024-07-22 11:00:02.050700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:57.016 [2024-07-22 11:00:02.050710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:57.016 [2024-07-22 11:00:02.050718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:57.016 [2024-07-22 11:00:02.050725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
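The nvmf_veth_init sequence traced above amounts to a small veth/namespace topology plus two firewall openings. Condensed into a standalone sketch (interface, namespace, and address names are copied from the trace; the best-effort teardown that common.sh attempts first is omitted):

  ip netns add nvmf_tgt_ns_spdk                              # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # second target pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge joins the host-side peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                # hairpin across the bridge
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                             # reachability checks, as in the trace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                                                  # initiator transport module

The bridge is what lets the host-side initiator interface reach the target interfaces now living inside nvmf_tgt_ns_spdk, which is why both the INPUT and FORWARD rules are opened before the ping checks.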
01:02:57.016 [2024-07-22 11:00:02.050835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:02:57.016 [2024-07-22 11:00:02.050999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:02:57.016 [2024-07-22 11:00:02.051137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:02:57.016 [2024-07-22 11:00:02.051140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:57.016 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:02:57.273 [2024-07-22 11:00:02.222605] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 01:02:57.273 [2024-07-22 11:00:02.291131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 01:02:57.273 11:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 01:02:59.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:01.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:04.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:06.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:08.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:10.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:13.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:14.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:17.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:19.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:21.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:23.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:26.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:28.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:30.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:32.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:35.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:37.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:39.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:42.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:44.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:46.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:48.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:51.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:52.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:55.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:57.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:59.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:01.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:04.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:06.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:08.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:10.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:13.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:15.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:17.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:19.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:22.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:24.072 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:26.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:28.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:31.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:32.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:35.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:37.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:39.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:41.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:44.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:46.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:48.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:50.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:53.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:55.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:57.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:00.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:02.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:04.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:06.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:09.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:10.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:13.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:15.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:17.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:19.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:22.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:24.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:26.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:29.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:31.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:33.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:35.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:37.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:40.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:42.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:44.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:47.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:49.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:51.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:53.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:56.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:05:58.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:00.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:02.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:04.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:06.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:09.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:11.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 01:06:13.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:16.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:18.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:20.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:22.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:25.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:27.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:29.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:31.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:34.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:35.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:38.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:40.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:40.407 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 01:06:40.407 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:40.408 rmmod nvme_tcp 01:06:40.408 rmmod nvme_fabrics 01:06:40.408 rmmod nvme_keyring 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 79395 ']' 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 79395 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 79395 ']' 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 79395 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79395 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:40.408 killing process with pid 79395 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79395' 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 79395 01:06:40.408 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 79395 01:06:40.666 11:03:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:40.666 ************************************ 01:06:40.666 END TEST nvmf_connect_disconnect 01:06:40.666 ************************************ 01:06:40.666 01:06:40.666 real 3m44.371s 01:06:40.666 user 14m31.919s 01:06:40.666 sys 0m23.833s 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:40.666 11:03:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:06:40.666 11:03:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:06:40.666 11:03:45 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 01:06:40.666 11:03:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:40.666 11:03:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:40.666 11:03:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:40.666 ************************************ 01:06:40.666 START TEST nvmf_multitarget 01:06:40.666 ************************************ 01:06:40.666 11:03:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 01:06:40.666 * Looking for test storage... 
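After the listener came up, connect_disconnect.sh configured the target over RPC and then cycled an initiator connection 100 times; the teardown traced just above undoes the setup. A condensed sketch follows: the RPC arguments are copied from the trace (rpc_cmd is the framework's wrapper around scripts/rpc.py), the loop body runs under 'set +x' so its shape is reconstructed here rather than quoted, and _remove_spdk_ns is approximated by an explicit netns delete.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target configuration, as traced via rpc_cmd.
  $rpc_py nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc_py bdev_malloc_create 64 512                     # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE -> Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Rough shape of the connect/disconnect loop (num_iterations=100 and
  # NVME_CONNECT='nvme connect -i 8' in the trace); each pass prints one
  # "NQN:... disconnected 1 controller(s)" line as seen above.
  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done

  # Teardown (nvmftestfini): unload initiator modules, stop the target,
  # remove the namespace-based network.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  ip netns delete nvmf_tgt_ns_spdk      # stands in for _remove_spdk_ns, whose body is hidden in the trace
  ip -4 addr flush nvmf_init_if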
01:06:40.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:06:40.924 11:03:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:40.924 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 01:06:40.924 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:40.924 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:40.924 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:40.925 11:03:45 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:40.925 Cannot find device "nvmf_tgt_br" 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:40.925 Cannot find device "nvmf_tgt_br2" 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:40.925 Cannot find device "nvmf_tgt_br" 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:40.925 Cannot find device "nvmf_tgt_br2" 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:40.925 11:03:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 01:06:40.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:40.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:40.925 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:41.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:06:41.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 01:06:41.193 01:06:41.193 --- 10.0.0.2 ping statistics --- 01:06:41.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:41.193 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:41.193 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:41.193 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:06:41.193 01:06:41.193 --- 10.0.0.3 ping statistics --- 01:06:41.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:41.193 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:41.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:41.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:06:41.193 01:06:41.193 --- 10.0.0.1 ping statistics --- 01:06:41.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:41.193 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=83144 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 83144 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 83144 ']' 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:06:41.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
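nvmfappstart then launches the target inside the namespace and blocks until its RPC socket answers. A minimal stand-in for what the trace shows (the real wait is the waitforlisten helper from autotest_common.sh; polling rpc_get_methods is used here only as an approximation):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Resolve the "Waiting for process to start up and listen on UNIX domain socket"
  # step: keep polling until /var/tmp/spdk.sock accepts RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done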
01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:41.193 11:03:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 01:06:41.193 [2024-07-22 11:03:46.342094] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:41.193 [2024-07-22 11:03:46.342184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:41.451 [2024-07-22 11:03:46.487884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:06:41.451 [2024-07-22 11:03:46.591350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:41.451 [2024-07-22 11:03:46.591676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:41.451 [2024-07-22 11:03:46.591833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:41.451 [2024-07-22 11:03:46.591902] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:41.451 [2024-07-22 11:03:46.592066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:41.451 [2024-07-22 11:03:46.592227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:41.451 [2024-07-22 11:03:46.592645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:06:41.451 [2024-07-22 11:03:46.592648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:06:41.451 [2024-07-22 11:03:46.592813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 01:06:42.381 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 01:06:42.639 "nvmf_tgt_1" 01:06:42.639 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 01:06:42.639 "nvmf_tgt_2" 01:06:42.639 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
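The multitarget checks traced here (targets created above, deleted just below) reduce to a handful of calls against multitarget_rpc.py; the arguments and the jq length assertions mirror the trace, while the intermediate shell variables the script uses are elided:

  rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

  $rpc_py nvmf_get_targets | jq length          # 1: only the default target
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc_py nvmf_get_targets | jq length          # 3 after both creates
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  $rpc_py nvmf_get_targets | jq length          # back to 1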
01:06:42.639 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 01:06:42.897 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 01:06:42.897 11:03:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 01:06:42.897 true 01:06:42.897 11:03:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 01:06:43.155 true 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 01:06:43.155 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:06:43.155 rmmod nvme_tcp 01:06:43.155 rmmod nvme_fabrics 01:06:43.155 rmmod nvme_keyring 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 83144 ']' 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 83144 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 83144 ']' 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 83144 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:06:43.413 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83144 01:06:43.413 killing process with pid 83144 01:06:43.414 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:06:43.414 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:06:43.414 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83144' 01:06:43.414 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 83144 01:06:43.414 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 83144 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:06:43.672 ************************************ 01:06:43.672 END TEST nvmf_multitarget 01:06:43.672 ************************************ 01:06:43.672 01:06:43.672 real 0m2.892s 01:06:43.672 user 0m9.307s 01:06:43.672 sys 0m0.776s 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 01:06:43.672 11:03:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 01:06:43.672 11:03:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:06:43.672 11:03:48 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 01:06:43.672 11:03:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:06:43.672 11:03:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:06:43.672 11:03:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:06:43.672 ************************************ 01:06:43.672 START TEST nvmf_rpc 01:06:43.672 ************************************ 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 01:06:43.672 * Looking for test storage... 
01:06:43.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:06:43.672 Cannot find device "nvmf_tgt_br" 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 01:06:43.672 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:06:43.672 Cannot find device "nvmf_tgt_br2" 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:06:43.930 Cannot find device "nvmf_tgt_br" 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:06:43.930 Cannot find device "nvmf_tgt_br2" 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:43.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:43.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:43.930 11:03:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:06:43.930 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:44.187 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:44.187 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:44.187 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:44.187 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:06:44.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:44.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 01:06:44.187 01:06:44.187 --- 10.0.0.2 ping statistics --- 01:06:44.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:44.188 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:06:44.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:44.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 01:06:44.188 01:06:44.188 --- 10.0.0.3 ping statistics --- 01:06:44.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:44.188 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:44.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:06:44.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:06:44.188 01:06:44.188 --- 10.0.0.1 ping statistics --- 01:06:44.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:44.188 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=83378 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 83378 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 83378 ']' 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:44.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 01:06:44.188 11:03:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:44.188 [2024-07-22 11:03:49.264627] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:06:44.188 [2024-07-22 11:03:49.264905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:44.446 [2024-07-22 11:03:49.400265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:06:44.446 [2024-07-22 11:03:49.497236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:44.446 [2024-07-22 11:03:49.497279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
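For reference, the nvmf_veth_init sequence traced above boils down to the following topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target's interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and the host-side veth ends are tied together by the nvmf_br bridge. This is a condensed sketch using the interface names, addresses and iptables rules from the trace; it needs root and only illustrates the layout, it is not a replacement for nvmf/common.sh.

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry addresses, the *_br ends will join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side ends so initiator and target can reach each other
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # open TCP/4420 on the initiator interface and allow bridged forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # same reachability checks as in the trace
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1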
01:06:44.446 [2024-07-22 11:03:49.497305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:44.446 [2024-07-22 11:03:49.497327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:44.446 [2024-07-22 11:03:49.497350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:44.446 [2024-07-22 11:03:49.497525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:06:44.446 [2024-07-22 11:03:49.498103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:06:44.446 [2024-07-22 11:03:49.498194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:06:44.446 [2024-07-22 11:03:49.498187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.011 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 01:06:45.011 "poll_groups": [ 01:06:45.011 { 01:06:45.011 "admin_qpairs": 0, 01:06:45.011 "completed_nvme_io": 0, 01:06:45.011 "current_admin_qpairs": 0, 01:06:45.011 "current_io_qpairs": 0, 01:06:45.011 "io_qpairs": 0, 01:06:45.011 "name": "nvmf_tgt_poll_group_000", 01:06:45.011 "pending_bdev_io": 0, 01:06:45.011 "transports": [] 01:06:45.011 }, 01:06:45.011 { 01:06:45.011 "admin_qpairs": 0, 01:06:45.011 "completed_nvme_io": 0, 01:06:45.011 "current_admin_qpairs": 0, 01:06:45.011 "current_io_qpairs": 0, 01:06:45.011 "io_qpairs": 0, 01:06:45.011 "name": "nvmf_tgt_poll_group_001", 01:06:45.011 "pending_bdev_io": 0, 01:06:45.011 "transports": [] 01:06:45.011 }, 01:06:45.011 { 01:06:45.011 "admin_qpairs": 0, 01:06:45.011 "completed_nvme_io": 0, 01:06:45.011 "current_admin_qpairs": 0, 01:06:45.011 "current_io_qpairs": 0, 01:06:45.011 "io_qpairs": 0, 01:06:45.011 "name": "nvmf_tgt_poll_group_002", 01:06:45.011 "pending_bdev_io": 0, 01:06:45.011 "transports": [] 01:06:45.011 }, 01:06:45.011 { 01:06:45.011 "admin_qpairs": 0, 01:06:45.011 "completed_nvme_io": 0, 01:06:45.011 "current_admin_qpairs": 0, 01:06:45.011 "current_io_qpairs": 0, 01:06:45.011 "io_qpairs": 0, 01:06:45.011 "name": "nvmf_tgt_poll_group_003", 01:06:45.011 "pending_bdev_io": 0, 01:06:45.011 "transports": [] 01:06:45.011 } 01:06:45.011 ], 01:06:45.011 "tick_rate": 2200000000 01:06:45.011 }' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.269 [2024-07-22 11:03:50.321858] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 01:06:45.269 "poll_groups": [ 01:06:45.269 { 01:06:45.269 "admin_qpairs": 0, 01:06:45.269 "completed_nvme_io": 0, 01:06:45.269 "current_admin_qpairs": 0, 01:06:45.269 "current_io_qpairs": 0, 01:06:45.269 "io_qpairs": 0, 01:06:45.269 "name": "nvmf_tgt_poll_group_000", 01:06:45.269 "pending_bdev_io": 0, 01:06:45.269 "transports": [ 01:06:45.269 { 01:06:45.269 "trtype": "TCP" 01:06:45.269 } 01:06:45.269 ] 01:06:45.269 }, 01:06:45.269 { 01:06:45.269 "admin_qpairs": 0, 01:06:45.269 "completed_nvme_io": 0, 01:06:45.269 "current_admin_qpairs": 0, 01:06:45.269 "current_io_qpairs": 0, 01:06:45.269 "io_qpairs": 0, 01:06:45.269 "name": "nvmf_tgt_poll_group_001", 01:06:45.269 "pending_bdev_io": 0, 01:06:45.269 "transports": [ 01:06:45.269 { 01:06:45.269 "trtype": "TCP" 01:06:45.269 } 01:06:45.269 ] 01:06:45.269 }, 01:06:45.269 { 01:06:45.269 "admin_qpairs": 0, 01:06:45.269 "completed_nvme_io": 0, 01:06:45.269 "current_admin_qpairs": 0, 01:06:45.269 "current_io_qpairs": 0, 01:06:45.269 "io_qpairs": 0, 01:06:45.269 "name": "nvmf_tgt_poll_group_002", 01:06:45.269 "pending_bdev_io": 0, 01:06:45.269 "transports": [ 01:06:45.269 { 01:06:45.269 "trtype": "TCP" 01:06:45.269 } 01:06:45.269 ] 01:06:45.269 }, 01:06:45.269 { 01:06:45.269 "admin_qpairs": 0, 01:06:45.269 "completed_nvme_io": 0, 01:06:45.269 "current_admin_qpairs": 0, 01:06:45.269 "current_io_qpairs": 0, 01:06:45.269 "io_qpairs": 0, 01:06:45.269 "name": "nvmf_tgt_poll_group_003", 01:06:45.269 "pending_bdev_io": 0, 01:06:45.269 "transports": [ 01:06:45.269 { 01:06:45.269 "trtype": "TCP" 01:06:45.269 } 01:06:45.269 ] 01:06:45.269 } 01:06:45.269 ], 01:06:45.269 "tick_rate": 2200000000 01:06:45.269 }' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
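In the trace, rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket that waitforlisten checked above. A minimal standalone equivalent of the transport-creation and stats checks would look roughly as below; the rpc() shorthand and the absolute path are assumptions for illustration, and the transport flags are copied verbatim from NVMF_TRANSPORT_OPTS in the trace.

    # shorthand for the SPDK RPC client (path as used in this job; adjust to your checkout)
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192           # same options as the trace
    rpc nvmf_get_stats | jq '.poll_groups[].name' | wc -l  # expect 4 poll groups, one per core in -m 0xF
    # before any initiator connects, every qpair counter should sum to zero
    rpc nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
    rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'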
01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.269 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.540 Malloc1 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.540 [2024-07-22 11:03:50.529194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -a 10.0.0.2 -s 4420 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -a 10.0.0.2 -s 4420 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -a 10.0.0.2 -s 4420 01:06:45.540 [2024-07-22 11:03:50.557510] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479' 01:06:45.540 Failed to write to /dev/nvme-fabrics: Input/output error 01:06:45.540 could not add new controller: failed to write to nvme-fabrics device 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 01:06:45.540 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:06:45.541 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:06:45.541 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:06:45.541 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:06:45.541 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:45.541 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:45.541 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:45.541 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:45.812 11:03:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 01:06:45.812 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 01:06:45.812 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:06:45.812 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:06:45.812 11:03:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:06:47.714 11:03:52 
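The exchange above is the host-authorization check: cnode1 was created with allow_any_host disabled (the -d flag at rpc.sh@54), so the first nvme connect is rejected with "does not allow host" until the host NQN is whitelisted. A minimal sketch of the same flow, reusing NVME_HOSTNQN/NVME_HOSTID as set by nvmf/common.sh earlier in the trace and the assumed rpc() shorthand from the previous sketch:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    # rejected: the subsystem has an empty host whitelist and allow_any_host is off
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" || echo "connect rejected, as expected"

    # whitelist this host NQN, then the same connect succeeds
    rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"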
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:06:47.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:47.714 [2024-07-22 11:03:52.858585] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479' 01:06:47.714 Failed to write to /dev/nvme-fabrics: Input/output error 01:06:47.714 could not add new controller: failed to write to nvme-fabrics device 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:47.714 11:03:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:47.976 11:03:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 01:06:47.976 11:03:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 01:06:47.976 11:03:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:06:47.976 11:03:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:06:47.976 11:03:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 01:06:49.879 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:06:49.879 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:06:49.879 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:06:49.879 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:06:49.879 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:06:49.879 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 01:06:49.879 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:06:50.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:06:50.138 11:03:55 
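Taken together with the previous step, the remove_host / allow_any_host sequence above shows the other half of the access-control check: once the host NQN is removed from the whitelist the connect fails again, and enabling allow_any_host (-e) re-opens the subsystem to any initiator. Condensed, with the same assumed rpc() shorthand:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    # with the whitelist empty again, the connect is rejected exactly as before

    rpc nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1   # -e: accept any host NQN
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"             # succeeds without add_host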
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:50.138 [2024-07-22 11:03:55.163852] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:50.138 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:50.139 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:50.139 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:50.397 11:03:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:06:50.397 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 01:06:50.397 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:06:50.397 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:06:50.397 11:03:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:06:52.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:52.296 [2024-07-22 11:03:57.463048] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:52.296 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:52.555 11:03:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:06:52.555 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 01:06:52.555 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:06:52.555 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:06:52.555 11:03:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 01:06:54.451 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:06:54.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:54.709 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:54.710 [2024-07-22 11:03:59.759373] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:54.710 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:54.968 11:03:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:06:54.968 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 01:06:54.968 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:06:54.968 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:06:54.968 11:03:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 01:06:56.894 11:04:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:06:56.894 11:04:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:06:56.894 11:04:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:06:56.894 11:04:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:06:56.894 11:04:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:06:56.894 11:04:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 01:06:56.894 11:04:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:06:56.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:56.894 [2024-07-22 11:04:02.063053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:56.894 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:57.152 11:04:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:06:57.152 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 01:06:57.152 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:06:57.152 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:06:57.152 11:04:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:06:59.682 
11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:06:59.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:59.682 [2024-07-22 11:04:04.370989] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:06:59.682 11:04:04 
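The trace from rpc.sh@81 onward repeats the same subsystem lifecycle five times. Condensed into the shape of the loop it is executing (rpc() shorthand assumed as above, NQN/serial values taken from the trace; the real waitforserial and waitforserial_disconnect helpers poll with retries rather than the simple loop shown here):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    for i in $(seq 1 5); do
        rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # expose Malloc1 as nsid 5
        rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        # wait until a block device with the subsystem's serial shows up
        until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done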
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:06:59.682 11:04:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:07:01.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 [2024-07-22 11:04:06.690482] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 [2024-07-22 11:04:06.738537] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:01.583 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.584 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.584 [2024-07-22 11:04:06.786562] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
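The five passes traced above and below are the body of rpc.sh's create/teardown loop: each one creates the subsystem, attaches a TCP listener and the Malloc1 namespace, opens it to any host, then removes the namespace and deletes the subsystem again. Stripped of the xtrace plumbing, a minimal sketch of that loop (rpc.py path, address, serial and bdev name are all taken from this trace; it assumes an nvmf_tgt is already running) looks like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    loops=5

    for ((i = 1; i <= loops; i++)); do
        "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME           # subsystem with the test serial
        "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # TCP listener on the target address
        "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1                           # expose the Malloc1 bdev as a namespace
        "$rpc" nvmf_subsystem_allow_any_host "$nqn"                           # no host allow-list needed for the test
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1                              # detach namespace 1 again
        "$rpc" nvmf_delete_subsystem "$nqn"                                   # and drop the subsystem entirely
    done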
01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 [2024-07-22 11:04:06.834664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
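For reference, the connect/disconnect exchange at the top of this trace (target/rpc.sh@86 through @91) is the usual waitforserial pattern: connect over TCP, poll lsblk until a block device carrying the test serial shows up, disconnect, then poll until it is gone. Roughly, with the hostnqn/hostid taken from the log and retry bounds approximating common.sh's helpers:

    serial=SPDKISFASTANDAWESOME
    nqn=nqn.2016-06.io.spdk:cnode1
    hostid=8977fc08-3b30-49e8-886e-3a1f0545f479

    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
        --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid="$hostid"

    # wait up to ~30 s for a namespace with the test serial to show up in lsblk
    for ((i = 0; i <= 15; i++)); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && break
    done

    nvme disconnect -n "$nqn"

    # and wait for the device to disappear before the subsystem is torn down
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        sleep 2
    done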
01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 [2024-07-22 11:04:06.882732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.843 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 01:07:01.844 "poll_groups": [ 01:07:01.844 { 01:07:01.844 "admin_qpairs": 2, 01:07:01.844 "completed_nvme_io": 115, 01:07:01.844 "current_admin_qpairs": 0, 01:07:01.844 "current_io_qpairs": 0, 01:07:01.844 "io_qpairs": 16, 01:07:01.844 "name": "nvmf_tgt_poll_group_000", 01:07:01.844 "pending_bdev_io": 0, 01:07:01.844 "transports": [ 01:07:01.844 { 01:07:01.844 "trtype": "TCP" 01:07:01.844 } 01:07:01.844 ] 01:07:01.844 }, 01:07:01.844 { 01:07:01.844 "admin_qpairs": 3, 01:07:01.844 "completed_nvme_io": 167, 01:07:01.844 "current_admin_qpairs": 0, 01:07:01.844 "current_io_qpairs": 0, 01:07:01.844 "io_qpairs": 17, 01:07:01.844 "name": "nvmf_tgt_poll_group_001", 01:07:01.844 "pending_bdev_io": 0, 01:07:01.844 "transports": [ 01:07:01.844 { 01:07:01.844 "trtype": "TCP" 01:07:01.844 } 01:07:01.844 ] 01:07:01.844 }, 01:07:01.844 { 01:07:01.844 "admin_qpairs": 1, 01:07:01.844 
"completed_nvme_io": 70, 01:07:01.844 "current_admin_qpairs": 0, 01:07:01.844 "current_io_qpairs": 0, 01:07:01.844 "io_qpairs": 19, 01:07:01.844 "name": "nvmf_tgt_poll_group_002", 01:07:01.844 "pending_bdev_io": 0, 01:07:01.844 "transports": [ 01:07:01.844 { 01:07:01.844 "trtype": "TCP" 01:07:01.844 } 01:07:01.844 ] 01:07:01.844 }, 01:07:01.844 { 01:07:01.844 "admin_qpairs": 1, 01:07:01.844 "completed_nvme_io": 68, 01:07:01.844 "current_admin_qpairs": 0, 01:07:01.844 "current_io_qpairs": 0, 01:07:01.844 "io_qpairs": 18, 01:07:01.844 "name": "nvmf_tgt_poll_group_003", 01:07:01.844 "pending_bdev_io": 0, 01:07:01.844 "transports": [ 01:07:01.844 { 01:07:01.844 "trtype": "TCP" 01:07:01.844 } 01:07:01.844 ] 01:07:01.844 } 01:07:01.844 ], 01:07:01.844 "tick_rate": 2200000000 01:07:01.844 }' 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 01:07:01.844 11:04:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:02.103 rmmod nvme_tcp 01:07:02.103 rmmod nvme_fabrics 01:07:02.103 rmmod nvme_keyring 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 83378 ']' 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 83378 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 83378 ']' 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 83378 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83378 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:07:02.103 killing process with pid 83378 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83378' 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 83378 01:07:02.103 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 83378 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:02.362 01:07:02.362 real 0m18.731s 01:07:02.362 user 1m10.203s 01:07:02.362 sys 0m2.735s 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:02.362 11:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:07:02.362 ************************************ 01:07:02.362 END TEST nvmf_rpc 01:07:02.362 ************************************ 01:07:02.362 11:04:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:07:02.362 11:04:07 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 01:07:02.362 11:04:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:02.362 11:04:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:02.362 11:04:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:02.362 ************************************ 01:07:02.362 START TEST nvmf_invalid 01:07:02.362 ************************************ 01:07:02.362 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 01:07:02.620 * Looking for test storage... 
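The nvmf_invalid run that starts here feeds deliberately malformed parameters to the create RPC and asserts on the JSON-RPC error text. Reduced to its essence (rpc.py path and NQNs as they appear later in this trace; the error substrings are the ones the test greps for), the first two checks amount to something like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # a target name that does not exist must be rejected with "Unable to find target"
    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1649 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    # a serial number with a trailing control character (0x1f) must be rejected as an invalid SN
    out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12590 2>&1) || true
    [[ $out == *"Invalid SN"* ]]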
01:07:02.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:02.620 11:04:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:02.621 
11:04:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:02.621 11:04:07 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:02.621 Cannot find device "nvmf_tgt_br" 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:02.621 Cannot find device "nvmf_tgt_br2" 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:07:02.621 Cannot find device "nvmf_tgt_br" 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:02.621 Cannot find device "nvmf_tgt_br2" 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:02.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:02.621 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:02.621 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:02.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:07:02.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 01:07:02.880 01:07:02.880 --- 10.0.0.2 ping statistics --- 01:07:02.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:02.880 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:02.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:07:02.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 01:07:02.880 01:07:02.880 --- 10.0.0.3 ping statistics --- 01:07:02.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:02.880 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:02.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:02.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:07:02.880 01:07:02.880 --- 10.0.0.1 ping statistics --- 01:07:02.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:02.880 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=83889 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 83889 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 83889 ']' 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:02.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:02.880 11:04:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 01:07:02.880 [2024-07-22 11:04:08.053592] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:07:02.880 [2024-07-22 11:04:08.053686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:03.138 [2024-07-22 11:04:08.192945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:07:03.138 [2024-07-22 11:04:08.290037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:03.138 [2024-07-22 11:04:08.290123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:03.138 [2024-07-22 11:04:08.290135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:03.138 [2024-07-22 11:04:08.290144] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:03.138 [2024-07-22 11:04:08.290152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:03.138 [2024-07-22 11:04:08.290296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:03.138 [2024-07-22 11:04:08.291059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:07:03.138 [2024-07-22 11:04:08.291133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:07:03.138 [2024-07-22 11:04:08.291137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:07:04.069 11:04:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:04.069 11:04:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 01:07:04.069 11:04:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:04.069 11:04:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:04.069 11:04:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 01:07:04.069 11:04:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:04.069 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:07:04.069 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1649 01:07:04.326 [2024-07-22 11:04:09.360380] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 01:07:04.326 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/22 11:04:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1649 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 01:07:04.326 request: 01:07:04.326 { 01:07:04.326 "method": "nvmf_create_subsystem", 01:07:04.326 "params": { 01:07:04.326 "nqn": "nqn.2016-06.io.spdk:cnode1649", 01:07:04.326 "tgt_name": "foobar" 01:07:04.326 } 01:07:04.326 } 01:07:04.326 Got JSON-RPC error response 01:07:04.326 GoRPCClient: error on JSON-RPC call' 01:07:04.326 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/22 11:04:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1649 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 01:07:04.326 request: 
01:07:04.326 { 01:07:04.326 "method": "nvmf_create_subsystem", 01:07:04.326 "params": { 01:07:04.326 "nqn": "nqn.2016-06.io.spdk:cnode1649", 01:07:04.326 "tgt_name": "foobar" 01:07:04.326 } 01:07:04.326 } 01:07:04.326 Got JSON-RPC error response 01:07:04.326 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 01:07:04.326 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 01:07:04.326 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12590 01:07:04.583 [2024-07-22 11:04:09.652794] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12590: invalid serial number 'SPDKISFASTANDAWESOME' 01:07:04.583 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/22 11:04:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12590 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 01:07:04.583 request: 01:07:04.583 { 01:07:04.583 "method": "nvmf_create_subsystem", 01:07:04.583 "params": { 01:07:04.583 "nqn": "nqn.2016-06.io.spdk:cnode12590", 01:07:04.583 "serial_number": "SPDKISFASTANDAWESOME\u001f" 01:07:04.583 } 01:07:04.583 } 01:07:04.583 Got JSON-RPC error response 01:07:04.583 GoRPCClient: error on JSON-RPC call' 01:07:04.583 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/22 11:04:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12590 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 01:07:04.583 request: 01:07:04.583 { 01:07:04.583 "method": "nvmf_create_subsystem", 01:07:04.583 "params": { 01:07:04.583 "nqn": "nqn.2016-06.io.spdk:cnode12590", 01:07:04.583 "serial_number": "SPDKISFASTANDAWESOME\u001f" 01:07:04.583 } 01:07:04.583 } 01:07:04.583 Got JSON-RPC error response 01:07:04.583 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 01:07:04.583 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 01:07:04.583 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17049 01:07:04.841 [2024-07-22 11:04:09.901042] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17049: invalid model number 'SPDK_Controller' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/22 11:04:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17049], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 01:07:04.841 request: 01:07:04.841 { 01:07:04.841 "method": "nvmf_create_subsystem", 01:07:04.841 "params": { 01:07:04.841 "nqn": "nqn.2016-06.io.spdk:cnode17049", 01:07:04.841 "model_number": "SPDK_Controller\u001f" 01:07:04.841 } 01:07:04.841 } 01:07:04.841 Got JSON-RPC error response 01:07:04.841 GoRPCClient: error on JSON-RPC call' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/22 11:04:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode17049], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 01:07:04.841 request: 01:07:04.841 { 01:07:04.841 "method": "nvmf_create_subsystem", 01:07:04.841 "params": { 01:07:04.841 "nqn": "nqn.2016-06.io.spdk:cnode17049", 01:07:04.841 "model_number": "SPDK_Controller\u001f" 01:07:04.841 } 01:07:04.841 } 01:07:04.841 Got JSON-RPC error response 01:07:04.841 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.841 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ M == \- ]] 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Mv|!qD|LlKDWX&|,b22wp' 01:07:04.842 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Mv|!qD|LlKDWX&|,b22wp' nqn.2016-06.io.spdk:cnode15194 01:07:05.100 [2024-07-22 11:04:10.289481] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15194: invalid serial number 'Mv|!qD|LlKDWX&|,b22wp' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/22 11:04:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15194 serial_number:Mv|!qD|LlKDWX&|,b22wp], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Mv|!qD|LlKDWX&|,b22wp 01:07:05.358 request: 01:07:05.358 { 01:07:05.358 "method": "nvmf_create_subsystem", 01:07:05.358 "params": { 01:07:05.358 "nqn": "nqn.2016-06.io.spdk:cnode15194", 01:07:05.358 "serial_number": "Mv|!qD|LlKDWX&|,b22wp" 01:07:05.358 } 01:07:05.358 } 01:07:05.358 Got JSON-RPC error response 01:07:05.358 GoRPCClient: error on JSON-RPC call' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/22 11:04:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15194 serial_number:Mv|!qD|LlKDWX&|,b22wp], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Mv|!qD|LlKDWX&|,b22wp 01:07:05.358 request: 01:07:05.358 { 01:07:05.358 "method": "nvmf_create_subsystem", 01:07:05.358 "params": { 01:07:05.358 "nqn": "nqn.2016-06.io.spdk:cnode15194", 01:07:05.358 "serial_number": "Mv|!qD|LlKDWX&|,b22wp" 01:07:05.358 } 01:07:05.358 } 01:07:05.358 Got JSON-RPC error response 01:07:05.358 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 01:07:05.358 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
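The long run of printf %x / echo -e / string+= entries traced here is invalid.sh's gen_random_s helper building a 41-character string one random character at a time, drawn from the chars array of codes 32-127 declared above. How the script picks an index into that array is not visible in this excerpt; the stand-alone sketch below is an equivalent that simply uses bash's $RANDOM, not the script's exact code.

# sketch of an equivalent generator (assumption: plain bash, $RANDOM for the pick)
gen_random_s_sketch() {
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        # same 32..127 code range as the chars array traced above
        string+=$(echo -e "\\x$(printf %x $(( RANDOM % 96 + 32 )))")
    done
    echo "$string"
}
gen_random_s_sketch 41   # e.g. a 41-character candidate like the one fed to nvmf_create_subsystem below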
01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'h;+76?kbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S' 01:07:05.359 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'h;+76?kbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S' nqn.2016-06.io.spdk:cnode17845 01:07:05.617 [2024-07-22 11:04:10.785994] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17845: invalid model number 'h;+76?kbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S' 01:07:05.617 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/22 11:04:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:h;+76?kbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S nqn:nqn.2016-06.io.spdk:cnode17845], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN h;+76?kbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S 01:07:05.617 request: 01:07:05.617 { 01:07:05.617 "method": "nvmf_create_subsystem", 01:07:05.617 "params": { 01:07:05.617 "nqn": "nqn.2016-06.io.spdk:cnode17845", 01:07:05.617 "model_number": "h;+76?\u007fkbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S" 01:07:05.617 } 01:07:05.617 } 01:07:05.617 Got JSON-RPC error response 01:07:05.617 GoRPCClient: error on JSON-RPC call' 01:07:05.617 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/22 11:04:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:h;+76?kbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S nqn:nqn.2016-06.io.spdk:cnode17845], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 
h;+76?kbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S 01:07:05.617 request: 01:07:05.617 { 01:07:05.617 "method": "nvmf_create_subsystem", 01:07:05.617 "params": { 01:07:05.617 "nqn": "nqn.2016-06.io.spdk:cnode17845", 01:07:05.617 "model_number": "h;+76?\u007fkbgOyOI3D#5D8n@:5}1T,!x@g*R$TW|u@S" 01:07:05.617 } 01:07:05.617 } 01:07:05.617 Got JSON-RPC error response 01:07:05.617 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 01:07:05.617 11:04:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 01:07:05.875 [2024-07-22 11:04:11.070399] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:06.133 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 01:07:06.390 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 01:07:06.390 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 01:07:06.390 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 01:07:06.390 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 01:07:06.390 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 01:07:06.655 [2024-07-22 11:04:11.664082] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 01:07:06.655 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/22 11:04:11 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 01:07:06.655 request: 01:07:06.655 { 01:07:06.655 "method": "nvmf_subsystem_remove_listener", 01:07:06.655 "params": { 01:07:06.655 "nqn": "nqn.2016-06.io.spdk:cnode", 01:07:06.655 "listen_address": { 01:07:06.655 "trtype": "tcp", 01:07:06.655 "traddr": "", 01:07:06.655 "trsvcid": "4421" 01:07:06.655 } 01:07:06.655 } 01:07:06.655 } 01:07:06.655 Got JSON-RPC error response 01:07:06.655 GoRPCClient: error on JSON-RPC call' 01:07:06.655 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/22 11:04:11 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 01:07:06.655 request: 01:07:06.655 { 01:07:06.655 "method": "nvmf_subsystem_remove_listener", 01:07:06.655 "params": { 01:07:06.655 "nqn": "nqn.2016-06.io.spdk:cnode", 01:07:06.655 "listen_address": { 01:07:06.655 "trtype": "tcp", 01:07:06.655 "traddr": "", 01:07:06.655 "trsvcid": "4421" 01:07:06.655 } 01:07:06.655 } 01:07:06.655 } 01:07:06.655 Got JSON-RPC error response 01:07:06.655 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 01:07:06.655 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22062 -i 0 01:07:06.928 [2024-07-22 11:04:11.916301] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22062: invalid cntlid range [0-65519] 01:07:06.928 11:04:11 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@73 -- # out='2024/07/22 11:04:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode22062], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 01:07:06.928 request: 01:07:06.928 { 01:07:06.928 "method": "nvmf_create_subsystem", 01:07:06.928 "params": { 01:07:06.928 "nqn": "nqn.2016-06.io.spdk:cnode22062", 01:07:06.928 "min_cntlid": 0 01:07:06.928 } 01:07:06.928 } 01:07:06.928 Got JSON-RPC error response 01:07:06.928 GoRPCClient: error on JSON-RPC call' 01:07:06.928 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/22 11:04:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode22062], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 01:07:06.928 request: 01:07:06.928 { 01:07:06.928 "method": "nvmf_create_subsystem", 01:07:06.928 "params": { 01:07:06.928 "nqn": "nqn.2016-06.io.spdk:cnode22062", 01:07:06.928 "min_cntlid": 0 01:07:06.928 } 01:07:06.928 } 01:07:06.928 Got JSON-RPC error response 01:07:06.928 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:07:06.928 11:04:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18264 -i 65520 01:07:07.186 [2024-07-22 11:04:12.212567] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18264: invalid cntlid range [65520-65519] 01:07:07.186 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/22 11:04:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode18264], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 01:07:07.186 request: 01:07:07.186 { 01:07:07.186 "method": "nvmf_create_subsystem", 01:07:07.186 "params": { 01:07:07.186 "nqn": "nqn.2016-06.io.spdk:cnode18264", 01:07:07.186 "min_cntlid": 65520 01:07:07.186 } 01:07:07.186 } 01:07:07.186 Got JSON-RPC error response 01:07:07.186 GoRPCClient: error on JSON-RPC call' 01:07:07.186 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/22 11:04:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode18264], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 01:07:07.186 request: 01:07:07.186 { 01:07:07.186 "method": "nvmf_create_subsystem", 01:07:07.186 "params": { 01:07:07.186 "nqn": "nqn.2016-06.io.spdk:cnode18264", 01:07:07.186 "min_cntlid": 65520 01:07:07.186 } 01:07:07.186 } 01:07:07.186 Got JSON-RPC error response 01:07:07.186 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:07:07.186 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7247 -I 0 01:07:07.443 [2024-07-22 11:04:12.464936] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7247: invalid cntlid range [1-0] 01:07:07.443 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/22 11:04:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7247], err: error received for nvmf_create_subsystem 
method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 01:07:07.443 request: 01:07:07.443 { 01:07:07.443 "method": "nvmf_create_subsystem", 01:07:07.443 "params": { 01:07:07.443 "nqn": "nqn.2016-06.io.spdk:cnode7247", 01:07:07.443 "max_cntlid": 0 01:07:07.443 } 01:07:07.443 } 01:07:07.443 Got JSON-RPC error response 01:07:07.443 GoRPCClient: error on JSON-RPC call' 01:07:07.443 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/22 11:04:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7247], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 01:07:07.443 request: 01:07:07.443 { 01:07:07.443 "method": "nvmf_create_subsystem", 01:07:07.443 "params": { 01:07:07.443 "nqn": "nqn.2016-06.io.spdk:cnode7247", 01:07:07.443 "max_cntlid": 0 01:07:07.443 } 01:07:07.443 } 01:07:07.443 Got JSON-RPC error response 01:07:07.443 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:07:07.443 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4425 -I 65520 01:07:07.701 [2024-07-22 11:04:12.753308] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4425: invalid cntlid range [1-65520] 01:07:07.701 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/22 11:04:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4425], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 01:07:07.701 request: 01:07:07.701 { 01:07:07.701 "method": "nvmf_create_subsystem", 01:07:07.701 "params": { 01:07:07.701 "nqn": "nqn.2016-06.io.spdk:cnode4425", 01:07:07.701 "max_cntlid": 65520 01:07:07.701 } 01:07:07.701 } 01:07:07.701 Got JSON-RPC error response 01:07:07.701 GoRPCClient: error on JSON-RPC call' 01:07:07.701 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/22 11:04:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4425], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 01:07:07.701 request: 01:07:07.701 { 01:07:07.701 "method": "nvmf_create_subsystem", 01:07:07.701 "params": { 01:07:07.701 "nqn": "nqn.2016-06.io.spdk:cnode4425", 01:07:07.701 "max_cntlid": 65520 01:07:07.701 } 01:07:07.701 } 01:07:07.701 Got JSON-RPC error response 01:07:07.701 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:07:07.701 11:04:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1964 -i 6 -I 5 01:07:07.959 [2024-07-22 11:04:13.001605] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1964: invalid cntlid range [6-5] 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/22 11:04:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode1964], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 01:07:07.959 request: 01:07:07.959 { 01:07:07.959 "method": "nvmf_create_subsystem", 01:07:07.959 "params": { 01:07:07.959 "nqn": "nqn.2016-06.io.spdk:cnode1964", 
01:07:07.959 "min_cntlid": 6, 01:07:07.959 "max_cntlid": 5 01:07:07.959 } 01:07:07.959 } 01:07:07.959 Got JSON-RPC error response 01:07:07.959 GoRPCClient: error on JSON-RPC call' 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/22 11:04:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode1964], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 01:07:07.959 request: 01:07:07.959 { 01:07:07.959 "method": "nvmf_create_subsystem", 01:07:07.959 "params": { 01:07:07.959 "nqn": "nqn.2016-06.io.spdk:cnode1964", 01:07:07.959 "min_cntlid": 6, 01:07:07.959 "max_cntlid": 5 01:07:07.959 } 01:07:07.959 } 01:07:07.959 Got JSON-RPC error response 01:07:07.959 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 01:07:07.959 { 01:07:07.959 "name": "foobar", 01:07:07.959 "method": "nvmf_delete_target", 01:07:07.959 "req_id": 1 01:07:07.959 } 01:07:07.959 Got JSON-RPC error response 01:07:07.959 response: 01:07:07.959 { 01:07:07.959 "code": -32602, 01:07:07.959 "message": "The specified target doesn'\''t exist, cannot delete it." 01:07:07.959 }' 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 01:07:07.959 { 01:07:07.959 "name": "foobar", 01:07:07.959 "method": "nvmf_delete_target", 01:07:07.959 "req_id": 1 01:07:07.959 } 01:07:07.959 Got JSON-RPC error response 01:07:07.959 response: 01:07:07.959 { 01:07:07.959 "code": -32602, 01:07:07.959 "message": "The specified target doesn't exist, cannot delete it." 
01:07:07.959 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:07.959 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:08.216 rmmod nvme_tcp 01:07:08.216 rmmod nvme_fabrics 01:07:08.216 rmmod nvme_keyring 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 83889 ']' 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 83889 01:07:08.216 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 83889 ']' 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 83889 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83889 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:07:08.217 killing process with pid 83889 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83889' 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 83889 01:07:08.217 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 83889 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:08.475 01:07:08.475 real 0m6.011s 01:07:08.475 user 0m24.113s 01:07:08.475 sys 0m1.362s 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:08.475 11:04:13 nvmf_tcp.nvmf_invalid -- 
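The nvmf_invalid run traced above boils down to a handful of negative rpc.py calls that must each come back with JSON-RPC error -32602. A minimal manual repro is sketched below, assuming a target is already listening on the default RPC socket and that SPDK_DIR points at the same checkout used in this job; the cnodeA subsystem name is an arbitrary placeholder.

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"
# cntlid range validation: per the errors above, min/max must stay within [1, 65519] and min <= max
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeA -i 0        || echo "rejected: Invalid cntlid range [0-65519]"
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeA -i 65520    || echo "rejected: Invalid cntlid range [65520-65519]"
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeA -I 0        || echo "rejected: Invalid cntlid range [1-0]"
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeA -I 65520    || echo "rejected: Invalid cntlid range [1-65520]"
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeA -i 6 -I 5   || echo "rejected: Invalid cntlid range [6-5]"
# the invalid serial (-s) and model number (-d) cases above work the same way with over-long / non-printable strings
# deleting a target that was never created fails with the same error code
"$SPDK_DIR/test/nvmf/target/multitarget_rpc.py" nvmf_delete_target --name foobar || echo "rejected: target does not exist"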
common/autotest_common.sh@10 -- # set +x 01:07:08.475 ************************************ 01:07:08.475 END TEST nvmf_invalid 01:07:08.475 ************************************ 01:07:08.475 11:04:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:07:08.475 11:04:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 01:07:08.475 11:04:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:08.475 11:04:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:08.475 11:04:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:08.475 ************************************ 01:07:08.475 START TEST nvmf_abort 01:07:08.475 ************************************ 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 01:07:08.475 * Looking for test storage... 01:07:08.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:07:08.475 11:04:13 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:08.475 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:08.733 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:08.734 Cannot find device "nvmf_tgt_br" 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:08.734 Cannot find device "nvmf_tgt_br2" 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:07:08.734 Cannot find device "nvmf_tgt_br" 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:08.734 Cannot find device "nvmf_tgt_br2" 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 01:07:08.734 11:04:13 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:08.734 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:08.734 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:08.734 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:08.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
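The nvmf_veth_init steps traced above amount to a small topology: a network namespace for the target, veth pairs bridged back to the host, 10.0.0.1/24 on the initiator side and 10.0.0.2/24 (plus 10.0.0.3/24 on a second interface, omitted below) inside the namespace; the ping checks around this point just confirm those addresses are reachable in both directions. A condensed repro of the same commands, assuming root and a clean host:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT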
01:07:08.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 01:07:08.992 01:07:08.992 --- 10.0.0.2 ping statistics --- 01:07:08.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:08.992 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:08.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:08.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 01:07:08.992 01:07:08.992 --- 10.0.0.3 ping statistics --- 01:07:08.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:08.992 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:08.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:08.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 01:07:08.992 01:07:08.992 --- 10.0.0.1 ping statistics --- 01:07:08.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:08.992 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=84395 01:07:08.992 11:04:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 84395 01:07:08.992 11:04:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 84395 ']' 01:07:08.992 11:04:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:08.992 11:04:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:08.992 11:04:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:08.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
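What follows is nvmfappstart: the target binary is launched inside the namespace (the full ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE command line is echoed below) and the harness blocks until the RPC socket answers. A hand-rolled stand-in for that wait is sketched here, assuming rpc.py's default /var/tmp/spdk.sock socket and the rpc_get_methods call as a readiness probe; it is not the waitforlisten helper itself.

# run from the repo root
sudo ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
tgt_pid=$!
# poll the JSON-RPC socket until the target is ready
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || { echo "nvmf_tgt exited early" >&2; break; }
    sleep 0.5
done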
01:07:08.992 11:04:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:08.992 11:04:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:07:08.992 11:04:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:08.992 [2024-07-22 11:04:14.058143] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:07:08.992 [2024-07-22 11:04:14.058227] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:09.251 [2024-07-22 11:04:14.200229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:07:09.251 [2024-07-22 11:04:14.297516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:09.251 [2024-07-22 11:04:14.297788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:09.251 [2024-07-22 11:04:14.297968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:09.251 [2024-07-22 11:04:14.298096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:09.251 [2024-07-22 11:04:14.298146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:09.251 [2024-07-22 11:04:14.298538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:07:09.251 [2024-07-22 11:04:14.298685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:07:09.251 [2024-07-22 11:04:14.298821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:10.183 [2024-07-22 11:04:15.115089] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:10.183 Malloc0 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:07:10.183 
11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:10.183 Delay0 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:10.183 [2024-07-22 11:04:15.192108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:10.183 11:04:15 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 01:07:10.183 [2024-07-22 11:04:15.382731] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:07:12.709 Initializing NVMe Controllers 01:07:12.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 01:07:12.709 controller IO queue size 128 less than required 01:07:12.709 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 01:07:12.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 01:07:12.709 Initialization complete. Launching workers. 
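The abort exerciser invoked at target/abort.sh@30 is a regular SPDK example binary and can be re-run by hand against the same listener; the queue-size notice above and the NS/CTRLR abort counters just below are its output. Same invocation, from the repo root (the flags are read here as core mask, run time in seconds, log level and queue depth, matching how abort.sh uses them):

# -c 0x1 core mask, -t 1 second of I/O, -l warning log level, -q 128 queue depth
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128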
01:07:12.709 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30875 01:07:12.709 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30936, failed to submit 62 01:07:12.709 success 30879, unsuccess 57, failed 0 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:12.709 rmmod nvme_tcp 01:07:12.709 rmmod nvme_fabrics 01:07:12.709 rmmod nvme_keyring 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 84395 ']' 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 84395 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 84395 ']' 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 84395 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84395 01:07:12.709 killing process with pid 84395 01:07:12.709 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84395' 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 84395 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 84395 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:12.710 01:07:12.710 real 0m4.295s 01:07:12.710 user 0m12.437s 01:07:12.710 sys 0m1.133s 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:12.710 11:04:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:07:12.710 ************************************ 01:07:12.710 END TEST nvmf_abort 01:07:12.710 ************************************ 01:07:12.710 11:04:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:07:12.710 11:04:17 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 01:07:12.710 11:04:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:12.710 11:04:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:12.968 11:04:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:12.968 ************************************ 01:07:12.968 START TEST nvmf_ns_hotplug_stress 01:07:12.968 ************************************ 01:07:12.968 11:04:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 01:07:12.968 * Looking for test storage... 01:07:12.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:12.968 11:04:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:12.968 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:12.969 11:04:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:12.969 Cannot find device "nvmf_tgt_br" 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:12.969 Cannot find device "nvmf_tgt_br2" 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:07:12.969 Cannot find device "nvmf_tgt_br" 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:12.969 Cannot find device "nvmf_tgt_br2" 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:12.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 01:07:12.969 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:13.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:13.228 11:04:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:13.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:07:13.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 01:07:13.228 01:07:13.228 --- 10.0.0.2 ping statistics --- 01:07:13.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:13.228 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:13.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:13.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 01:07:13.228 01:07:13.228 --- 10.0.0.3 ping statistics --- 01:07:13.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:13.228 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:13.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:07:13.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:07:13.228 01:07:13.228 --- 10.0.0.1 ping statistics --- 01:07:13.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:13.228 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=84658 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 84658 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 84658 ']' 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:13.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:13.228 11:04:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:07:13.487 [2024-07-22 11:04:18.438866] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:07:13.487 [2024-07-22 11:04:18.438949] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:13.487 [2024-07-22 11:04:18.582737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:07:13.487 [2024-07-22 11:04:18.687550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:07:13.487 [2024-07-22 11:04:18.687617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:13.487 [2024-07-22 11:04:18.687648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:13.487 [2024-07-22 11:04:18.687661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:13.487 [2024-07-22 11:04:18.687676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:13.487 [2024-07-22 11:04:18.687875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:07:13.487 [2024-07-22 11:04:18.687992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:07:13.487 [2024-07-22 11:04:18.687998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:14.419 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:14.419 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 01:07:14.419 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:14.419 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:14.419 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:07:14.419 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:14.419 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 01:07:14.419 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:07:14.677 [2024-07-22 11:04:19.660944] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:14.677 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:07:14.934 11:04:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:07:15.193 [2024-07-22 11:04:20.193756] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:15.193 11:04:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:07:15.450 11:04:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 01:07:15.708 Malloc0 01:07:15.708 11:04:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:07:15.966 Delay0 01:07:15.966 11:04:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:16.223 11:04:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 01:07:16.481 NULL1 01:07:16.481 
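Taken together, the hot-plug setup traced in this stretch is the ns_hotplug_stress.sh preamble: create the TCP transport, a subsystem capped at 10 namespaces, a delay bdev over a malloc bdev, and a resizable null bdev, then start spdk_nvme_perf against it. A condensed sketch reconstructed from the rpc.py calls visible in the trace, using the exact parameters of this run (the perf invocation is the one that appears a few lines below; the real script additionally checks the perf process with kill -0 between iterations):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0                 # backing store for the delay bdev
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512                      # the namespace that gets resized below
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # 30-second random-read load kept running while namespaces are added, removed and resized underneath it:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!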
11:04:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 01:07:16.738 11:04:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=84789 01:07:16.739 11:04:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 01:07:16.739 11:04:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:16.739 11:04:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:18.136 Read completed with error (sct=0, sc=11) 01:07:18.136 11:04:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:18.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:07:18.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:07:18.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:07:18.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:07:18.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:07:18.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:07:18.136 11:04:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 01:07:18.136 11:04:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 01:07:18.393 true 01:07:18.393 11:04:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:18.393 11:04:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:19.325 11:04:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:19.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:07:19.325 11:04:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 01:07:19.325 11:04:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 01:07:19.584 true 01:07:19.584 11:04:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:19.584 11:04:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:19.842 11:04:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:20.408 11:04:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 01:07:20.408 11:04:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 01:07:20.408 true 01:07:20.408 11:04:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:20.408 11:04:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:20.666 11:04:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:20.924 11:04:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 01:07:20.924 11:04:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 01:07:21.182 true 01:07:21.182 11:04:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:21.182 11:04:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:22.116 11:04:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:22.374 11:04:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 01:07:22.374 11:04:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 01:07:22.633 true 01:07:22.633 11:04:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:22.633 11:04:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:22.892 11:04:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:23.150 11:04:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 01:07:23.150 11:04:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 01:07:23.409 true 01:07:23.409 11:04:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:23.409 11:04:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:23.667 11:04:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:23.926 11:04:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 01:07:23.926 11:04:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 01:07:24.184 true 01:07:24.184 11:04:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:24.184 11:04:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:25.121 11:04:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:25.379 11:04:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 01:07:25.379 11:04:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 01:07:25.636 true 01:07:25.636 11:04:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:25.636 11:04:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:25.893 11:04:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:26.150 11:04:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 01:07:26.150 11:04:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 01:07:26.407 true 01:07:26.407 11:04:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:26.407 11:04:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:26.665 11:04:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:26.922 11:04:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 01:07:26.922 11:04:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 01:07:27.179 true 01:07:27.179 11:04:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:27.179 11:04:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:28.112 11:04:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:28.678 11:04:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 01:07:28.678 11:04:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 01:07:28.678 true 01:07:28.678 11:04:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:28.678 11:04:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:29.244 11:04:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:29.503 11:04:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 01:07:29.503 11:04:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 01:07:29.761 true 01:07:29.761 11:04:34 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:29.761 11:04:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:30.020 11:04:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:30.279 11:04:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 01:07:30.279 11:04:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 01:07:30.538 true 01:07:30.538 11:04:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:30.538 11:04:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:31.105 11:04:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:31.673 11:04:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 01:07:31.673 11:04:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 01:07:31.673 true 01:07:31.673 11:04:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:31.673 11:04:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:31.931 11:04:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:32.498 11:04:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 01:07:32.498 11:04:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 01:07:32.498 true 01:07:32.498 11:04:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:32.498 11:04:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:33.066 11:04:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:33.066 11:04:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 01:07:33.066 11:04:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 01:07:33.324 true 01:07:33.324 11:04:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:33.324 11:04:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:34.261 11:04:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 01:07:34.520 11:04:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 01:07:34.520 11:04:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 01:07:34.779 true 01:07:34.779 11:04:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:34.779 11:04:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:35.038 11:04:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:35.295 11:04:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 01:07:35.295 11:04:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 01:07:35.553 true 01:07:35.553 11:04:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:35.553 11:04:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:36.488 11:04:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:36.488 11:04:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 01:07:36.488 11:04:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 01:07:36.746 true 01:07:36.746 11:04:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:36.746 11:04:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:37.005 11:04:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:37.267 11:04:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 01:07:37.267 11:04:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 01:07:37.525 true 01:07:37.525 11:04:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:37.525 11:04:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:38.461 11:04:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:38.461 11:04:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 01:07:38.461 11:04:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 01:07:38.719 true 01:07:38.719 11:04:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:38.719 11:04:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:38.978 11:04:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:39.242 11:04:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 01:07:39.242 11:04:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 01:07:39.511 true 01:07:39.511 11:04:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:39.511 11:04:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:40.446 11:04:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:40.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:07:40.705 11:04:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 01:07:40.705 11:04:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 01:07:40.705 true 01:07:40.964 11:04:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:40.964 11:04:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:40.964 11:04:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:41.222 11:04:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 01:07:41.222 11:04:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 01:07:41.481 true 01:07:41.481 11:04:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:41.481 11:04:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:42.046 11:04:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:42.303 11:04:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 01:07:42.303 11:04:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 01:07:42.303 true 01:07:42.303 11:04:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:42.303 11:04:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:43.236 11:04:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 01:07:43.494 11:04:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 01:07:43.494 11:04:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 01:07:43.753 true 01:07:43.753 11:04:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:43.753 11:04:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:44.011 11:04:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:44.269 11:04:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 01:07:44.269 11:04:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 01:07:44.527 true 01:07:44.527 11:04:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:44.527 11:04:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:44.785 11:04:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:45.042 11:04:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 01:07:45.042 11:04:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 01:07:45.300 true 01:07:45.300 11:04:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:45.300 11:04:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:46.232 11:04:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:07:46.488 11:04:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 01:07:46.488 11:04:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 01:07:46.745 Initializing NVMe Controllers 01:07:46.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:07:46.745 Controller IO queue size 128, less than required. 01:07:46.745 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:07:46.745 Controller IO queue size 128, less than required. 01:07:46.745 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:07:46.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:07:46.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:07:46.745 Initialization complete. Launching workers. 
01:07:46.745 ======================================================== 01:07:46.745 Latency(us) 01:07:46.745 Device Information : IOPS MiB/s Average min max 01:07:46.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 332.06 0.16 168168.34 3379.98 1018961.01 01:07:46.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8605.19 4.20 14874.25 1971.11 572033.15 01:07:46.745 ======================================================== 01:07:46.745 Total : 8937.25 4.36 20569.87 1971.11 1018961.01 01:07:46.745 01:07:46.745 true 01:07:47.002 11:04:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84789 01:07:47.002 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (84789) - No such process 01:07:47.002 11:04:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 84789 01:07:47.002 11:04:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:47.259 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:47.516 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 01:07:47.516 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 01:07:47.516 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 01:07:47.516 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:47.516 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 01:07:47.516 null0 01:07:47.516 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:07:47.516 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:47.516 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 01:07:47.773 null1 01:07:48.031 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:07:48.031 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:48.031 11:04:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 01:07:48.031 null2 01:07:48.031 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:07:48.031 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:48.031 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 01:07:48.288 null3 01:07:48.545 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:07:48.545 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:48.545 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 01:07:48.545 null4 01:07:48.545 11:04:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:07:48.545 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:48.545 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 01:07:48.803 null5 01:07:48.803 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:07:48.803 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:48.803 11:04:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 01:07:49.368 null6 01:07:49.368 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:07:49.368 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:49.368 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 01:07:49.368 null7 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
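The interleaved sh@14-sh@18 lines from here on are eight copies of the script's add_remove helper running concurrently. Reconstructed from the trace (a sketch of the loop, not a copy of the helper in ns_hotplug_stress.sh), each worker toggles one namespace ID on and off ten times:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {                       # add_remove <nsid> <bdev>, e.g. add_remove 1 null0
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }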
01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
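Pieced together from the script locations echoed in the trace (ns_hotplug_stress.sh@14-18 for the worker and @62-66 for the launcher), the overall stress pattern looks roughly like the sketch below. Variable names and loop bounds follow the trace; everything else is a reconstruction rather than the script verbatim, with $rpc standing in for the rpc.py path shown above.

    add_remove() {                        # @14: one worker per namespace ID
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do  # @16: ten add/remove cycles per worker
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

    pids=()
    for (( i = 0; i < nthreads; i++ )); do  # @62: nthreads is 8 in this run
        add_remove "$((i + 1))" "null$i" &  # @63: launch each worker in the background
        pids+=($!)                          # @64: remember its PID
    done
    wait "${pids[@]}"                       # @66: wait for all eight workers to finish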
01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 85814 85815 85818 85820 85822 85823 85826 85827 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:49.637 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:49.932 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:49.932 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:49.932 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:49.932 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:49.932 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:49.932 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:49.932 11:04:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:49.932 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:49.932 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.932 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:49.932 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:49.932 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.932 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:49.932 
11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:49.932 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:49.932 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:50.205 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.462 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:50.720 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:50.979 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:50.979 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:50.979 11:04:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:50.979 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:51.238 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.496 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:51.754 11:04:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
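The interleaving above is the eight background workers racing each other, so at any instant the subsystem holds whichever namespaces happen to be attached. When reproducing this, the live namespace count can be watched from another shell with the nvmf_get_subsystems RPC; this polling loop is purely illustrative and is not part of ns_hotplug_stress.sh:

    # Count attached namespaces once a second while the stress loops run.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while true; do
        "$rpc" nvmf_get_subsystems | grep -c '"nsid"'
        sleep 1
    done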
01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:52.012 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.270 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.271 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:52.271 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 01:07:52.271 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.271 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.271 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:52.529 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:52.787 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:53.045 11:04:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:53.045 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:53.045 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:53.045 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:53.045 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:53.045 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:53.045 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.302 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:53.560 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:53.818 11:04:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.077 11:04:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:54.077 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.335 
11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.335 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.593 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:07:54.852 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:07:54.852 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.852 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.852 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 01:07:54.852 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.853 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.853 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.853 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.853 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.853 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:54.853 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:54.853 11:04:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:07:55.111 rmmod nvme_tcp 01:07:55.111 rmmod nvme_fabrics 01:07:55.111 rmmod nvme_keyring 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 84658 ']' 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 84658 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 84658 ']' 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 84658 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84658 01:07:55.111 killing process with pid 84658 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84658' 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 84658 01:07:55.111 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 84658 01:07:55.369 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:07:55.369 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:07:55.369 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:07:55.369 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:07:55.369 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 01:07:55.369 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:55.369 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:07:55.369 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:55.627 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:07:55.627 ************************************ 01:07:55.627 END TEST nvmf_ns_hotplug_stress 01:07:55.627 ************************************ 01:07:55.627 01:07:55.627 real 0m42.679s 01:07:55.627 user 3m25.415s 01:07:55.627 sys 0m12.906s 01:07:55.627 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 01:07:55.627 11:05:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:07:55.627 11:05:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:07:55.627 11:05:00 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 01:07:55.627 11:05:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:07:55.627 11:05:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:07:55.627 11:05:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:07:55.627 ************************************ 01:07:55.627 START TEST nvmf_connect_stress 01:07:55.627 ************************************ 01:07:55.627 11:05:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 01:07:55.628 * Looking for test storage... 
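The nvmf_ns_hotplug_stress teardown logged just before the nvmf_connect_stress banner amounts to unloading the host-side NVMe/TCP modules, killing the nvmf_tgt process started for that test, and clearing the veth interface used by the kernel initiator. A compressed sketch of those steps, reusing the PID and interface name from the log; the network-namespace removal is an assumed detail, since _remove_spdk_ns runs with tracing disabled here:

    # Rough equivalent of the cleanup sequence above (normally done by nvmf/common.sh).
    modprobe -v -r nvme-tcp          # the rmmod output above also shows nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill 84658                       # nvmf_tgt PID for this run ("killprocess 84658")
    wait 84658 2>/dev/null || true   # only meaningful in the shell that launched the target
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if    # drop the initiator-side address, as in the trace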
01:07:55.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:07:55.628 Cannot find device "nvmf_tgt_br" 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:07:55.628 Cannot find device "nvmf_tgt_br2" 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:07:55.628 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:07:55.886 Cannot find device "nvmf_tgt_br" 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:07:55.886 Cannot find device "nvmf_tgt_br2" 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 01:07:55.886 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:55.886 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:55.886 11:05:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:55.886 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:07:56.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:07:56.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 01:07:56.144 01:07:56.144 --- 10.0.0.2 ping statistics --- 01:07:56.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:56.144 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:07:56.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:56.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 01:07:56.144 01:07:56.144 --- 10.0.0.3 ping statistics --- 01:07:56.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:56.144 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:56.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:56.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:07:56.144 01:07:56.144 --- 10.0.0.1 ping statistics --- 01:07:56.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:56.144 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=87125 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 87125 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 87125 ']' 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 01:07:56.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
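[editor's note] The nvmf_veth_init steps traced above (nvmf/common.sh@141-207) build a small, self-contained topology: one network namespace for the target, two veth pairs whose bridge-side ends are joined by nvmf_br, and an iptables rule opening TCP/4420 for NVMe-oF. A minimal standalone sketch that reproduces it outside the harness, assuming root and iproute2, with namespace, interface and address names taken directly from the trace:

    #!/usr/bin/env bash
    # Rebuild the test topology: target namespace, two bridged veth pairs, NVMe/TCP on 4420.
    set -euo pipefail

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    # 10.0.0.1 = initiator side, 10.0.0.2/.3 = target listeners inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # Bridge the host-side ends together and allow NVMe/TCP traffic
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2   # initiator -> first target address, as the trace does

The "Cannot find device" and "Cannot open network namespace" errors earlier in the trace are this same topology being torn down before it exists on a fresh run; they are expected and ignored by the harness.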
01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 01:07:56.144 11:05:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:56.144 [2024-07-22 11:05:01.266761] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:07:56.144 [2024-07-22 11:05:01.266843] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:56.402 [2024-07-22 11:05:01.411239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:07:56.402 [2024-07-22 11:05:01.504204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:56.402 [2024-07-22 11:05:01.504258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:56.402 [2024-07-22 11:05:01.504280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:56.402 [2024-07-22 11:05:01.504291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:56.402 [2024-07-22 11:05:01.504300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:56.402 [2024-07-22 11:05:01.505157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:07:56.402 [2024-07-22 11:05:01.505302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:07:56.402 [2024-07-22 11:05:01.505308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:57.336 [2024-07-22 11:05:02.302512] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:57.336 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:57.337 [2024-07-22 11:05:02.330747] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:57.337 NULL1 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=87177 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:57.337 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:57.595 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:57.595 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:07:57.595 11:05:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:07:57.595 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:57.595 11:05:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:58.160 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:58.160 11:05:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:07:58.160 11:05:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:07:58.160 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:58.160 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:58.418 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 01:07:58.418 11:05:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:07:58.418 11:05:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:07:58.418 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:58.418 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:58.676 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:58.676 11:05:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:07:58.676 11:05:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:07:58.676 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:58.676 11:05:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:58.934 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:58.934 11:05:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:07:58.934 11:05:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:07:58.934 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:58.934 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:59.192 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:59.192 11:05:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:07:59.192 11:05:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:07:59.192 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:59.192 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:07:59.758 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:07:59.758 11:05:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:07:59.758 11:05:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:07:59.758 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:07:59.758 11:05:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:00.096 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:00.096 11:05:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:00.096 11:05:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:00.096 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:00.096 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:00.360 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:00.360 11:05:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:00.360 11:05:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:00.360 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:00.360 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:00.617 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:00.617 11:05:05 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 87177 01:08:00.617 11:05:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:00.617 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:00.617 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:00.874 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:00.875 11:05:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:00.875 11:05:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:00.875 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:00.875 11:05:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:01.146 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:01.146 11:05:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:01.146 11:05:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:01.146 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:01.146 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:01.712 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:01.712 11:05:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:01.712 11:05:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:01.712 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:01.712 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:01.970 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:01.970 11:05:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:01.970 11:05:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:01.970 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:01.970 11:05:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:02.228 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:02.228 11:05:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:02.228 11:05:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:02.228 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:02.228 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:02.487 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:02.487 11:05:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:02.487 11:05:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:02.487 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:02.487 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:02.746 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:02.746 11:05:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:02.746 11:05:07 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:02.746 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:02.746 11:05:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:03.314 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:03.314 11:05:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:03.314 11:05:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:03.314 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:03.314 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:03.572 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:03.572 11:05:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:03.572 11:05:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:03.572 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:03.572 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:03.831 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:03.831 11:05:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:03.831 11:05:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:03.831 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:03.831 11:05:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:04.089 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:04.089 11:05:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:04.089 11:05:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:04.089 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:04.089 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:04.656 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:04.656 11:05:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:04.656 11:05:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:04.656 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:04.656 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:04.915 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:04.915 11:05:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:04.915 11:05:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:04.915 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:04.915 11:05:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:05.173 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:05.173 11:05:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:05.173 11:05:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
01:08:05.173 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:05.173 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:05.431 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:05.431 11:05:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:05.431 11:05:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:05.432 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:05.432 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:05.690 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:05.690 11:05:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:05.690 11:05:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:05.690 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:05.690 11:05:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:06.258 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:06.258 11:05:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:06.258 11:05:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:06.258 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:06.258 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:06.516 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:06.516 11:05:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:06.516 11:05:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:06.516 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:06.516 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:06.774 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:06.774 11:05:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:06.774 11:05:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:06.774 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:06.774 11:05:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:07.033 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:07.033 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:07.033 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:07.033 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:07.033 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:07.291 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:07.291 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:07.291 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:08:07.291 11:05:12 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:08:07.291 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:07.549 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87177 01:08:07.808 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (87177) - No such process 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 87177 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:08:07.808 rmmod nvme_tcp 01:08:07.808 rmmod nvme_fabrics 01:08:07.808 rmmod nvme_keyring 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 87125 ']' 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 87125 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 87125 ']' 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 87125 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87125 01:08:07.808 killing process with pid 87125 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87125' 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 87125 01:08:07.808 11:05:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 87125 01:08:08.066 11:05:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:08:08.066 11:05:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:08:08.066 11:05:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
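[editor's note] The connect_stress run traced above follows a simple supervision pattern (connect_stress.sh@20-39 in the trace): start the connect/disconnect stressor in the background against the 10.0.0.2:4420 listener, replay a pre-built batch of RPCs (rpc.txt, filled by the seq/cat loop at @27-28) at the target for as long as the stressor is alive, then reap it. The later "kill: (87177) - No such process" line is the loop's expected exit condition, not a failure. A rough sketch of that pattern; the trace does not show which RPCs the cat at @28 writes, so the bdev_get_bdevs payload below is only an illustrative stand-in, and feeding the batch through scripts/rpc.py stdin mirrors what the harness's rpc_cmd wrapper does:

    # Background stressor: connects/disconnects against the TCP listener for 10 s (-t 10)
    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # Pre-build a batch of 20 RPCs (the actual payload is an assumption, see above)
    rpcs=rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        echo "bdev_get_bdevs" >> "$rpcs"
    done

    # Keep the target's RPC server busy while the stressor is still running
    while kill -0 "$PERF_PID" 2> /dev/null; do
        scripts/rpc.py -s /var/tmp/spdk.sock < "$rpcs"
    done

    wait "$PERF_PID"   # propagate the stressor's exit status
    rm -f "$rpcs"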
01:08:08.066 11:05:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:08:08.066 11:05:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 01:08:08.066 11:05:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:08.066 11:05:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:08.066 11:05:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:08.325 11:05:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:08:08.325 01:08:08.325 real 0m12.624s 01:08:08.325 user 0m41.683s 01:08:08.325 sys 0m3.278s 01:08:08.325 11:05:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 01:08:08.325 11:05:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:08:08.325 ************************************ 01:08:08.325 END TEST nvmf_connect_stress 01:08:08.325 ************************************ 01:08:08.325 11:05:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:08:08.325 11:05:13 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 01:08:08.325 11:05:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:08:08.325 11:05:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:08:08.325 11:05:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:08.325 ************************************ 01:08:08.325 START TEST nvmf_fused_ordering 01:08:08.325 ************************************ 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 01:08:08.325 * Looking for test storage... 
01:08:08.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:08:08.325 Cannot find device "nvmf_tgt_br" 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:08:08.325 Cannot find device "nvmf_tgt_br2" 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:08:08.325 Cannot find device "nvmf_tgt_br" 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:08:08.325 Cannot find device "nvmf_tgt_br2" 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 01:08:08.325 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 01:08:08.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:08.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:08:08.583 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:08.584 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:08.841 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:08:08.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:08:08.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 01:08:08.841 01:08:08.841 --- 10.0.0.2 ping statistics --- 01:08:08.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:08.841 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:08:08.841 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:08:08.841 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:08.841 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 01:08:08.841 01:08:08.841 --- 10.0.0.3 ping statistics --- 01:08:08.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:08.841 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:08:08.841 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:08.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:08.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:08:08.841 01:08:08.841 --- 10.0.0.1 ping statistics --- 01:08:08.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:08.842 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=87511 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 87511 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 87511 ']' 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:08.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
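The nvmf_veth_init trace above builds the veth-based test topology: the initiator interface stays on the host, the two target interfaces move into the nvmf_tgt_ns_spdk namespace, and everything is tied together over the nvmf_br bridge before connectivity is confirmed with pings. A condensed standalone sketch of the same steps, using the interface names and addresses taken from the trace and assuming root privileges with iproute2/iptables available:

    # Sketch of the topology nvmf_veth_init sets up above (names/addresses from the trace; run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host -> target addresses over the bridge
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator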
01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:08.842 11:05:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:08.842 [2024-07-22 11:05:13.903501] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:08.842 [2024-07-22 11:05:13.903579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:08.842 [2024-07-22 11:05:14.047690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:09.100 [2024-07-22 11:05:14.140685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:09.100 [2024-07-22 11:05:14.140757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:09.100 [2024-07-22 11:05:14.140778] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:09.100 [2024-07-22 11:05:14.140789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:09.100 [2024-07-22 11:05:14.140800] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:09.100 [2024-07-22 11:05:14.140835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:08:10.033 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:10.033 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:10.034 [2024-07-22 11:05:14.960412] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
01:08:10.034 [2024-07-22 11:05:14.976484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:10.034 NULL1 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:10.034 11:05:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:10.034 11:05:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:10.034 11:05:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:08:10.034 [2024-07-22 11:05:15.028016] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
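The rpc_cmd calls traced above configure the freshly started target: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, add a TCP listener on 10.0.0.2:4420, and expose a null bdev as namespace 1. A standalone equivalent using SPDK's scripts/rpc.py is sketched below; the default RPC socket /var/tmp/spdk.sock is assumed here, matching the waitforlisten call earlier in the trace:

    # Same configuration issued directly through rpc.py (default socket /var/tmp/spdk.sock assumed).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # ~1 GB null bdev with 512-byte blocks ("size: 1GB" in the output below)
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1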
01:08:10.034 [2024-07-22 11:05:15.028055] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87561 ] 01:08:10.293 Attached to nqn.2016-06.io.spdk:cnode1 01:08:10.293 Namespace ID: 1 size: 1GB 01:08:10.293 fused_ordering(0) 01:08:10.293 fused_ordering(1) 01:08:10.293 fused_ordering(2) 01:08:10.293 fused_ordering(3) 01:08:10.293 fused_ordering(4) 01:08:10.293 fused_ordering(5) 01:08:10.293 fused_ordering(6) 01:08:10.293 fused_ordering(7) 01:08:10.293 fused_ordering(8) 01:08:10.293 fused_ordering(9) 01:08:10.293 fused_ordering(10) 01:08:10.293 fused_ordering(11) 01:08:10.293 fused_ordering(12) 01:08:10.293 fused_ordering(13) 01:08:10.293 fused_ordering(14) 01:08:10.293 fused_ordering(15) 01:08:10.293 fused_ordering(16) 01:08:10.293 fused_ordering(17) 01:08:10.293 fused_ordering(18) 01:08:10.293 fused_ordering(19) 01:08:10.293 fused_ordering(20) 01:08:10.293 fused_ordering(21) 01:08:10.293 fused_ordering(22) 01:08:10.293 fused_ordering(23) 01:08:10.293 fused_ordering(24) 01:08:10.293 fused_ordering(25) 01:08:10.293 fused_ordering(26) 01:08:10.293 fused_ordering(27) 01:08:10.293 fused_ordering(28) 01:08:10.293 fused_ordering(29) 01:08:10.293 fused_ordering(30) 01:08:10.293 fused_ordering(31) 01:08:10.293 fused_ordering(32) 01:08:10.293 fused_ordering(33) 01:08:10.293 fused_ordering(34) 01:08:10.293 fused_ordering(35) 01:08:10.294 fused_ordering(36) 01:08:10.294 fused_ordering(37) 01:08:10.294 fused_ordering(38) 01:08:10.294 fused_ordering(39) 01:08:10.294 fused_ordering(40) 01:08:10.294 fused_ordering(41) 01:08:10.294 fused_ordering(42) 01:08:10.294 fused_ordering(43) 01:08:10.294 fused_ordering(44) 01:08:10.294 fused_ordering(45) 01:08:10.294 fused_ordering(46) 01:08:10.294 fused_ordering(47) 01:08:10.294 fused_ordering(48) 01:08:10.294 fused_ordering(49) 01:08:10.294 fused_ordering(50) 01:08:10.294 fused_ordering(51) 01:08:10.294 fused_ordering(52) 01:08:10.294 fused_ordering(53) 01:08:10.294 fused_ordering(54) 01:08:10.294 fused_ordering(55) 01:08:10.294 fused_ordering(56) 01:08:10.294 fused_ordering(57) 01:08:10.294 fused_ordering(58) 01:08:10.294 fused_ordering(59) 01:08:10.294 fused_ordering(60) 01:08:10.294 fused_ordering(61) 01:08:10.294 fused_ordering(62) 01:08:10.294 fused_ordering(63) 01:08:10.294 fused_ordering(64) 01:08:10.294 fused_ordering(65) 01:08:10.294 fused_ordering(66) 01:08:10.294 fused_ordering(67) 01:08:10.294 fused_ordering(68) 01:08:10.294 fused_ordering(69) 01:08:10.294 fused_ordering(70) 01:08:10.294 fused_ordering(71) 01:08:10.294 fused_ordering(72) 01:08:10.294 fused_ordering(73) 01:08:10.294 fused_ordering(74) 01:08:10.294 fused_ordering(75) 01:08:10.294 fused_ordering(76) 01:08:10.294 fused_ordering(77) 01:08:10.294 fused_ordering(78) 01:08:10.294 fused_ordering(79) 01:08:10.294 fused_ordering(80) 01:08:10.294 fused_ordering(81) 01:08:10.294 fused_ordering(82) 01:08:10.294 fused_ordering(83) 01:08:10.294 fused_ordering(84) 01:08:10.294 fused_ordering(85) 01:08:10.294 fused_ordering(86) 01:08:10.294 fused_ordering(87) 01:08:10.294 fused_ordering(88) 01:08:10.294 fused_ordering(89) 01:08:10.294 fused_ordering(90) 01:08:10.294 fused_ordering(91) 01:08:10.294 fused_ordering(92) 01:08:10.294 fused_ordering(93) 01:08:10.294 fused_ordering(94) 01:08:10.294 fused_ordering(95) 01:08:10.294 fused_ordering(96) 01:08:10.294 fused_ordering(97) 01:08:10.294 
fused_ordering(98) 01:08:10.294 fused_ordering(99) 01:08:10.294 fused_ordering(100) 01:08:10.294 fused_ordering(101) 01:08:10.294 fused_ordering(102) 01:08:10.294 fused_ordering(103) 01:08:10.294 fused_ordering(104) 01:08:10.294 fused_ordering(105) 01:08:10.294 fused_ordering(106) 01:08:10.294 fused_ordering(107) 01:08:10.294 fused_ordering(108) 01:08:10.294 fused_ordering(109) 01:08:10.294 fused_ordering(110) 01:08:10.294 fused_ordering(111) 01:08:10.294 fused_ordering(112) 01:08:10.294 fused_ordering(113) 01:08:10.294 fused_ordering(114) 01:08:10.294 fused_ordering(115) 01:08:10.294 fused_ordering(116) 01:08:10.294 fused_ordering(117) 01:08:10.294 fused_ordering(118) 01:08:10.294 fused_ordering(119) 01:08:10.294 fused_ordering(120) 01:08:10.294 fused_ordering(121) 01:08:10.294 fused_ordering(122) 01:08:10.294 fused_ordering(123) 01:08:10.294 fused_ordering(124) 01:08:10.294 fused_ordering(125) 01:08:10.294 fused_ordering(126) 01:08:10.294 fused_ordering(127) 01:08:10.294 fused_ordering(128) 01:08:10.294 fused_ordering(129) 01:08:10.294 fused_ordering(130) 01:08:10.294 fused_ordering(131) 01:08:10.294 fused_ordering(132) 01:08:10.294 fused_ordering(133) 01:08:10.294 fused_ordering(134) 01:08:10.294 fused_ordering(135) 01:08:10.294 fused_ordering(136) 01:08:10.294 fused_ordering(137) 01:08:10.294 fused_ordering(138) 01:08:10.294 fused_ordering(139) 01:08:10.294 fused_ordering(140) 01:08:10.294 fused_ordering(141) 01:08:10.294 fused_ordering(142) 01:08:10.294 fused_ordering(143) 01:08:10.294 fused_ordering(144) 01:08:10.294 fused_ordering(145) 01:08:10.294 fused_ordering(146) 01:08:10.294 fused_ordering(147) 01:08:10.294 fused_ordering(148) 01:08:10.294 fused_ordering(149) 01:08:10.294 fused_ordering(150) 01:08:10.294 fused_ordering(151) 01:08:10.294 fused_ordering(152) 01:08:10.294 fused_ordering(153) 01:08:10.294 fused_ordering(154) 01:08:10.294 fused_ordering(155) 01:08:10.294 fused_ordering(156) 01:08:10.294 fused_ordering(157) 01:08:10.294 fused_ordering(158) 01:08:10.294 fused_ordering(159) 01:08:10.294 fused_ordering(160) 01:08:10.294 fused_ordering(161) 01:08:10.294 fused_ordering(162) 01:08:10.294 fused_ordering(163) 01:08:10.294 fused_ordering(164) 01:08:10.294 fused_ordering(165) 01:08:10.294 fused_ordering(166) 01:08:10.294 fused_ordering(167) 01:08:10.294 fused_ordering(168) 01:08:10.294 fused_ordering(169) 01:08:10.294 fused_ordering(170) 01:08:10.294 fused_ordering(171) 01:08:10.294 fused_ordering(172) 01:08:10.294 fused_ordering(173) 01:08:10.294 fused_ordering(174) 01:08:10.294 fused_ordering(175) 01:08:10.294 fused_ordering(176) 01:08:10.294 fused_ordering(177) 01:08:10.294 fused_ordering(178) 01:08:10.294 fused_ordering(179) 01:08:10.294 fused_ordering(180) 01:08:10.294 fused_ordering(181) 01:08:10.294 fused_ordering(182) 01:08:10.294 fused_ordering(183) 01:08:10.294 fused_ordering(184) 01:08:10.294 fused_ordering(185) 01:08:10.294 fused_ordering(186) 01:08:10.294 fused_ordering(187) 01:08:10.294 fused_ordering(188) 01:08:10.294 fused_ordering(189) 01:08:10.294 fused_ordering(190) 01:08:10.294 fused_ordering(191) 01:08:10.294 fused_ordering(192) 01:08:10.294 fused_ordering(193) 01:08:10.294 fused_ordering(194) 01:08:10.294 fused_ordering(195) 01:08:10.294 fused_ordering(196) 01:08:10.294 fused_ordering(197) 01:08:10.294 fused_ordering(198) 01:08:10.294 fused_ordering(199) 01:08:10.294 fused_ordering(200) 01:08:10.294 fused_ordering(201) 01:08:10.294 fused_ordering(202) 01:08:10.294 fused_ordering(203) 01:08:10.294 fused_ordering(204) 01:08:10.294 fused_ordering(205) 
01:08:10.862 fused_ordering(206) 01:08:10.862 fused_ordering(207) 01:08:10.862 fused_ordering(208) 01:08:10.862 fused_ordering(209) 01:08:10.862 fused_ordering(210) 01:08:10.862 fused_ordering(211) 01:08:10.862 fused_ordering(212) 01:08:10.862 fused_ordering(213) 01:08:10.862 fused_ordering(214) 01:08:10.862 fused_ordering(215) 01:08:10.862 fused_ordering(216) 01:08:10.862 fused_ordering(217) 01:08:10.862 fused_ordering(218) 01:08:10.862 fused_ordering(219) 01:08:10.862 fused_ordering(220) 01:08:10.862 fused_ordering(221) 01:08:10.862 fused_ordering(222) 01:08:10.862 fused_ordering(223) 01:08:10.862 fused_ordering(224) 01:08:10.862 fused_ordering(225) 01:08:10.862 fused_ordering(226) 01:08:10.862 fused_ordering(227) 01:08:10.862 fused_ordering(228) 01:08:10.862 fused_ordering(229) 01:08:10.862 fused_ordering(230) 01:08:10.862 fused_ordering(231) 01:08:10.862 fused_ordering(232) 01:08:10.862 fused_ordering(233) 01:08:10.862 fused_ordering(234) 01:08:10.862 fused_ordering(235) 01:08:10.862 fused_ordering(236) 01:08:10.862 fused_ordering(237) 01:08:10.862 fused_ordering(238) 01:08:10.862 fused_ordering(239) 01:08:10.862 fused_ordering(240) 01:08:10.862 fused_ordering(241) 01:08:10.862 fused_ordering(242) 01:08:10.862 fused_ordering(243) 01:08:10.862 fused_ordering(244) 01:08:10.862 fused_ordering(245) 01:08:10.862 fused_ordering(246) 01:08:10.862 fused_ordering(247) 01:08:10.862 fused_ordering(248) 01:08:10.862 fused_ordering(249) 01:08:10.862 fused_ordering(250) 01:08:10.862 fused_ordering(251) 01:08:10.862 fused_ordering(252) 01:08:10.862 fused_ordering(253) 01:08:10.862 fused_ordering(254) 01:08:10.862 fused_ordering(255) 01:08:10.862 fused_ordering(256) 01:08:10.862 fused_ordering(257) 01:08:10.862 fused_ordering(258) 01:08:10.862 fused_ordering(259) 01:08:10.862 fused_ordering(260) 01:08:10.862 fused_ordering(261) 01:08:10.862 fused_ordering(262) 01:08:10.862 fused_ordering(263) 01:08:10.862 fused_ordering(264) 01:08:10.862 fused_ordering(265) 01:08:10.862 fused_ordering(266) 01:08:10.862 fused_ordering(267) 01:08:10.862 fused_ordering(268) 01:08:10.862 fused_ordering(269) 01:08:10.862 fused_ordering(270) 01:08:10.862 fused_ordering(271) 01:08:10.862 fused_ordering(272) 01:08:10.862 fused_ordering(273) 01:08:10.862 fused_ordering(274) 01:08:10.862 fused_ordering(275) 01:08:10.862 fused_ordering(276) 01:08:10.862 fused_ordering(277) 01:08:10.862 fused_ordering(278) 01:08:10.862 fused_ordering(279) 01:08:10.862 fused_ordering(280) 01:08:10.862 fused_ordering(281) 01:08:10.862 fused_ordering(282) 01:08:10.862 fused_ordering(283) 01:08:10.862 fused_ordering(284) 01:08:10.862 fused_ordering(285) 01:08:10.862 fused_ordering(286) 01:08:10.862 fused_ordering(287) 01:08:10.862 fused_ordering(288) 01:08:10.862 fused_ordering(289) 01:08:10.862 fused_ordering(290) 01:08:10.862 fused_ordering(291) 01:08:10.862 fused_ordering(292) 01:08:10.862 fused_ordering(293) 01:08:10.862 fused_ordering(294) 01:08:10.862 fused_ordering(295) 01:08:10.862 fused_ordering(296) 01:08:10.862 fused_ordering(297) 01:08:10.862 fused_ordering(298) 01:08:10.862 fused_ordering(299) 01:08:10.862 fused_ordering(300) 01:08:10.862 fused_ordering(301) 01:08:10.862 fused_ordering(302) 01:08:10.862 fused_ordering(303) 01:08:10.862 fused_ordering(304) 01:08:10.862 fused_ordering(305) 01:08:10.862 fused_ordering(306) 01:08:10.862 fused_ordering(307) 01:08:10.862 fused_ordering(308) 01:08:10.862 fused_ordering(309) 01:08:10.862 fused_ordering(310) 01:08:10.862 fused_ordering(311) 01:08:10.862 fused_ordering(312) 01:08:10.862 
fused_ordering(313) 01:08:10.862 fused_ordering(314) 01:08:10.862 fused_ordering(315) 01:08:10.862 fused_ordering(316) 01:08:10.863 fused_ordering(317) 01:08:10.863 fused_ordering(318) 01:08:10.863 fused_ordering(319) 01:08:10.863 fused_ordering(320) 01:08:10.863 fused_ordering(321) 01:08:10.863 fused_ordering(322) 01:08:10.863 fused_ordering(323) 01:08:10.863 fused_ordering(324) 01:08:10.863 fused_ordering(325) 01:08:10.863 fused_ordering(326) 01:08:10.863 fused_ordering(327) 01:08:10.863 fused_ordering(328) 01:08:10.863 fused_ordering(329) 01:08:10.863 fused_ordering(330) 01:08:10.863 fused_ordering(331) 01:08:10.863 fused_ordering(332) 01:08:10.863 fused_ordering(333) 01:08:10.863 fused_ordering(334) 01:08:10.863 fused_ordering(335) 01:08:10.863 fused_ordering(336) 01:08:10.863 fused_ordering(337) 01:08:10.863 fused_ordering(338) 01:08:10.863 fused_ordering(339) 01:08:10.863 fused_ordering(340) 01:08:10.863 fused_ordering(341) 01:08:10.863 fused_ordering(342) 01:08:10.863 fused_ordering(343) 01:08:10.863 fused_ordering(344) 01:08:10.863 fused_ordering(345) 01:08:10.863 fused_ordering(346) 01:08:10.863 fused_ordering(347) 01:08:10.863 fused_ordering(348) 01:08:10.863 fused_ordering(349) 01:08:10.863 fused_ordering(350) 01:08:10.863 fused_ordering(351) 01:08:10.863 fused_ordering(352) 01:08:10.863 fused_ordering(353) 01:08:10.863 fused_ordering(354) 01:08:10.863 fused_ordering(355) 01:08:10.863 fused_ordering(356) 01:08:10.863 fused_ordering(357) 01:08:10.863 fused_ordering(358) 01:08:10.863 fused_ordering(359) 01:08:10.863 fused_ordering(360) 01:08:10.863 fused_ordering(361) 01:08:10.863 fused_ordering(362) 01:08:10.863 fused_ordering(363) 01:08:10.863 fused_ordering(364) 01:08:10.863 fused_ordering(365) 01:08:10.863 fused_ordering(366) 01:08:10.863 fused_ordering(367) 01:08:10.863 fused_ordering(368) 01:08:10.863 fused_ordering(369) 01:08:10.863 fused_ordering(370) 01:08:10.863 fused_ordering(371) 01:08:10.863 fused_ordering(372) 01:08:10.863 fused_ordering(373) 01:08:10.863 fused_ordering(374) 01:08:10.863 fused_ordering(375) 01:08:10.863 fused_ordering(376) 01:08:10.863 fused_ordering(377) 01:08:10.863 fused_ordering(378) 01:08:10.863 fused_ordering(379) 01:08:10.863 fused_ordering(380) 01:08:10.863 fused_ordering(381) 01:08:10.863 fused_ordering(382) 01:08:10.863 fused_ordering(383) 01:08:10.863 fused_ordering(384) 01:08:10.863 fused_ordering(385) 01:08:10.863 fused_ordering(386) 01:08:10.863 fused_ordering(387) 01:08:10.863 fused_ordering(388) 01:08:10.863 fused_ordering(389) 01:08:10.863 fused_ordering(390) 01:08:10.863 fused_ordering(391) 01:08:10.863 fused_ordering(392) 01:08:10.863 fused_ordering(393) 01:08:10.863 fused_ordering(394) 01:08:10.863 fused_ordering(395) 01:08:10.863 fused_ordering(396) 01:08:10.863 fused_ordering(397) 01:08:10.863 fused_ordering(398) 01:08:10.863 fused_ordering(399) 01:08:10.863 fused_ordering(400) 01:08:10.863 fused_ordering(401) 01:08:10.863 fused_ordering(402) 01:08:10.863 fused_ordering(403) 01:08:10.863 fused_ordering(404) 01:08:10.863 fused_ordering(405) 01:08:10.863 fused_ordering(406) 01:08:10.863 fused_ordering(407) 01:08:10.863 fused_ordering(408) 01:08:10.863 fused_ordering(409) 01:08:10.863 fused_ordering(410) 01:08:11.122 fused_ordering(411) 01:08:11.122 fused_ordering(412) 01:08:11.122 fused_ordering(413) 01:08:11.122 fused_ordering(414) 01:08:11.122 fused_ordering(415) 01:08:11.122 fused_ordering(416) 01:08:11.122 fused_ordering(417) 01:08:11.122 fused_ordering(418) 01:08:11.122 fused_ordering(419) 01:08:11.122 fused_ordering(420) 
01:08:11.122 fused_ordering(421) 01:08:11.122 fused_ordering(422) 01:08:11.122 fused_ordering(423) 01:08:11.122 fused_ordering(424) 01:08:11.122 fused_ordering(425) 01:08:11.122 fused_ordering(426) 01:08:11.122 fused_ordering(427) 01:08:11.122 fused_ordering(428) 01:08:11.122 fused_ordering(429) 01:08:11.122 fused_ordering(430) 01:08:11.122 fused_ordering(431) 01:08:11.122 fused_ordering(432) 01:08:11.122 fused_ordering(433) 01:08:11.122 fused_ordering(434) 01:08:11.122 fused_ordering(435) 01:08:11.122 fused_ordering(436) 01:08:11.122 fused_ordering(437) 01:08:11.122 fused_ordering(438) 01:08:11.122 fused_ordering(439) 01:08:11.122 fused_ordering(440) 01:08:11.122 fused_ordering(441) 01:08:11.122 fused_ordering(442) 01:08:11.122 fused_ordering(443) 01:08:11.122 fused_ordering(444) 01:08:11.122 fused_ordering(445) 01:08:11.122 fused_ordering(446) 01:08:11.122 fused_ordering(447) 01:08:11.122 fused_ordering(448) 01:08:11.122 fused_ordering(449) 01:08:11.122 fused_ordering(450) 01:08:11.122 fused_ordering(451) 01:08:11.122 fused_ordering(452) 01:08:11.122 fused_ordering(453) 01:08:11.122 fused_ordering(454) 01:08:11.122 fused_ordering(455) 01:08:11.122 fused_ordering(456) 01:08:11.122 fused_ordering(457) 01:08:11.122 fused_ordering(458) 01:08:11.122 fused_ordering(459) 01:08:11.122 fused_ordering(460) 01:08:11.122 fused_ordering(461) 01:08:11.122 fused_ordering(462) 01:08:11.122 fused_ordering(463) 01:08:11.122 fused_ordering(464) 01:08:11.122 fused_ordering(465) 01:08:11.122 fused_ordering(466) 01:08:11.122 fused_ordering(467) 01:08:11.122 fused_ordering(468) 01:08:11.122 fused_ordering(469) 01:08:11.122 fused_ordering(470) 01:08:11.122 fused_ordering(471) 01:08:11.122 fused_ordering(472) 01:08:11.122 fused_ordering(473) 01:08:11.122 fused_ordering(474) 01:08:11.122 fused_ordering(475) 01:08:11.122 fused_ordering(476) 01:08:11.122 fused_ordering(477) 01:08:11.122 fused_ordering(478) 01:08:11.122 fused_ordering(479) 01:08:11.122 fused_ordering(480) 01:08:11.122 fused_ordering(481) 01:08:11.122 fused_ordering(482) 01:08:11.122 fused_ordering(483) 01:08:11.122 fused_ordering(484) 01:08:11.122 fused_ordering(485) 01:08:11.122 fused_ordering(486) 01:08:11.122 fused_ordering(487) 01:08:11.122 fused_ordering(488) 01:08:11.122 fused_ordering(489) 01:08:11.122 fused_ordering(490) 01:08:11.122 fused_ordering(491) 01:08:11.122 fused_ordering(492) 01:08:11.122 fused_ordering(493) 01:08:11.122 fused_ordering(494) 01:08:11.122 fused_ordering(495) 01:08:11.122 fused_ordering(496) 01:08:11.122 fused_ordering(497) 01:08:11.122 fused_ordering(498) 01:08:11.122 fused_ordering(499) 01:08:11.122 fused_ordering(500) 01:08:11.122 fused_ordering(501) 01:08:11.122 fused_ordering(502) 01:08:11.122 fused_ordering(503) 01:08:11.122 fused_ordering(504) 01:08:11.122 fused_ordering(505) 01:08:11.122 fused_ordering(506) 01:08:11.122 fused_ordering(507) 01:08:11.122 fused_ordering(508) 01:08:11.122 fused_ordering(509) 01:08:11.122 fused_ordering(510) 01:08:11.122 fused_ordering(511) 01:08:11.122 fused_ordering(512) 01:08:11.122 fused_ordering(513) 01:08:11.122 fused_ordering(514) 01:08:11.122 fused_ordering(515) 01:08:11.122 fused_ordering(516) 01:08:11.122 fused_ordering(517) 01:08:11.122 fused_ordering(518) 01:08:11.122 fused_ordering(519) 01:08:11.122 fused_ordering(520) 01:08:11.122 fused_ordering(521) 01:08:11.122 fused_ordering(522) 01:08:11.122 fused_ordering(523) 01:08:11.122 fused_ordering(524) 01:08:11.122 fused_ordering(525) 01:08:11.122 fused_ordering(526) 01:08:11.122 fused_ordering(527) 01:08:11.122 
fused_ordering(528) 01:08:11.122 fused_ordering(529) 01:08:11.122 fused_ordering(530) 01:08:11.122 fused_ordering(531) 01:08:11.122 fused_ordering(532) 01:08:11.122 fused_ordering(533) 01:08:11.122 fused_ordering(534) 01:08:11.122 fused_ordering(535) 01:08:11.122 fused_ordering(536) 01:08:11.122 fused_ordering(537) 01:08:11.122 fused_ordering(538) 01:08:11.122 fused_ordering(539) 01:08:11.122 fused_ordering(540) 01:08:11.122 fused_ordering(541) 01:08:11.122 fused_ordering(542) 01:08:11.122 fused_ordering(543) 01:08:11.122 fused_ordering(544) 01:08:11.122 fused_ordering(545) 01:08:11.122 fused_ordering(546) 01:08:11.122 fused_ordering(547) 01:08:11.122 fused_ordering(548) 01:08:11.122 fused_ordering(549) 01:08:11.122 fused_ordering(550) 01:08:11.122 fused_ordering(551) 01:08:11.122 fused_ordering(552) 01:08:11.122 fused_ordering(553) 01:08:11.122 fused_ordering(554) 01:08:11.122 fused_ordering(555) 01:08:11.122 fused_ordering(556) 01:08:11.122 fused_ordering(557) 01:08:11.122 fused_ordering(558) 01:08:11.122 fused_ordering(559) 01:08:11.122 fused_ordering(560) 01:08:11.122 fused_ordering(561) 01:08:11.122 fused_ordering(562) 01:08:11.122 fused_ordering(563) 01:08:11.122 fused_ordering(564) 01:08:11.122 fused_ordering(565) 01:08:11.122 fused_ordering(566) 01:08:11.122 fused_ordering(567) 01:08:11.122 fused_ordering(568) 01:08:11.122 fused_ordering(569) 01:08:11.122 fused_ordering(570) 01:08:11.122 fused_ordering(571) 01:08:11.122 fused_ordering(572) 01:08:11.122 fused_ordering(573) 01:08:11.122 fused_ordering(574) 01:08:11.122 fused_ordering(575) 01:08:11.122 fused_ordering(576) 01:08:11.123 fused_ordering(577) 01:08:11.123 fused_ordering(578) 01:08:11.123 fused_ordering(579) 01:08:11.123 fused_ordering(580) 01:08:11.123 fused_ordering(581) 01:08:11.123 fused_ordering(582) 01:08:11.123 fused_ordering(583) 01:08:11.123 fused_ordering(584) 01:08:11.123 fused_ordering(585) 01:08:11.123 fused_ordering(586) 01:08:11.123 fused_ordering(587) 01:08:11.123 fused_ordering(588) 01:08:11.123 fused_ordering(589) 01:08:11.123 fused_ordering(590) 01:08:11.123 fused_ordering(591) 01:08:11.123 fused_ordering(592) 01:08:11.123 fused_ordering(593) 01:08:11.123 fused_ordering(594) 01:08:11.123 fused_ordering(595) 01:08:11.123 fused_ordering(596) 01:08:11.123 fused_ordering(597) 01:08:11.123 fused_ordering(598) 01:08:11.123 fused_ordering(599) 01:08:11.123 fused_ordering(600) 01:08:11.123 fused_ordering(601) 01:08:11.123 fused_ordering(602) 01:08:11.123 fused_ordering(603) 01:08:11.123 fused_ordering(604) 01:08:11.123 fused_ordering(605) 01:08:11.123 fused_ordering(606) 01:08:11.123 fused_ordering(607) 01:08:11.123 fused_ordering(608) 01:08:11.123 fused_ordering(609) 01:08:11.123 fused_ordering(610) 01:08:11.123 fused_ordering(611) 01:08:11.123 fused_ordering(612) 01:08:11.123 fused_ordering(613) 01:08:11.123 fused_ordering(614) 01:08:11.123 fused_ordering(615) 01:08:11.690 fused_ordering(616) 01:08:11.690 fused_ordering(617) 01:08:11.690 fused_ordering(618) 01:08:11.690 fused_ordering(619) 01:08:11.690 fused_ordering(620) 01:08:11.690 fused_ordering(621) 01:08:11.690 fused_ordering(622) 01:08:11.690 fused_ordering(623) 01:08:11.690 fused_ordering(624) 01:08:11.690 fused_ordering(625) 01:08:11.690 fused_ordering(626) 01:08:11.690 fused_ordering(627) 01:08:11.690 fused_ordering(628) 01:08:11.690 fused_ordering(629) 01:08:11.690 fused_ordering(630) 01:08:11.690 fused_ordering(631) 01:08:11.690 fused_ordering(632) 01:08:11.690 fused_ordering(633) 01:08:11.690 fused_ordering(634) 01:08:11.690 fused_ordering(635) 
01:08:11.690 fused_ordering(636) 01:08:11.690 fused_ordering(637) 01:08:11.690 fused_ordering(638) 01:08:11.690 fused_ordering(639) 01:08:11.690 fused_ordering(640) 01:08:11.690 fused_ordering(641) 01:08:11.690 fused_ordering(642) 01:08:11.690 fused_ordering(643) 01:08:11.690 fused_ordering(644) 01:08:11.690 fused_ordering(645) 01:08:11.690 fused_ordering(646) 01:08:11.690 fused_ordering(647) 01:08:11.690 fused_ordering(648) 01:08:11.690 fused_ordering(649) 01:08:11.690 fused_ordering(650) 01:08:11.690 fused_ordering(651) 01:08:11.690 fused_ordering(652) 01:08:11.690 fused_ordering(653) 01:08:11.690 fused_ordering(654) 01:08:11.690 fused_ordering(655) 01:08:11.690 fused_ordering(656) 01:08:11.690 fused_ordering(657) 01:08:11.690 fused_ordering(658) 01:08:11.690 fused_ordering(659) 01:08:11.690 fused_ordering(660) 01:08:11.690 fused_ordering(661) 01:08:11.690 fused_ordering(662) 01:08:11.690 fused_ordering(663) 01:08:11.690 fused_ordering(664) 01:08:11.690 fused_ordering(665) 01:08:11.690 fused_ordering(666) 01:08:11.690 fused_ordering(667) 01:08:11.690 fused_ordering(668) 01:08:11.690 fused_ordering(669) 01:08:11.690 fused_ordering(670) 01:08:11.690 fused_ordering(671) 01:08:11.690 fused_ordering(672) 01:08:11.690 fused_ordering(673) 01:08:11.690 fused_ordering(674) 01:08:11.690 fused_ordering(675) 01:08:11.690 fused_ordering(676) 01:08:11.690 fused_ordering(677) 01:08:11.690 fused_ordering(678) 01:08:11.690 fused_ordering(679) 01:08:11.691 fused_ordering(680) 01:08:11.691 fused_ordering(681) 01:08:11.691 fused_ordering(682) 01:08:11.691 fused_ordering(683) 01:08:11.691 fused_ordering(684) 01:08:11.691 fused_ordering(685) 01:08:11.691 fused_ordering(686) 01:08:11.691 fused_ordering(687) 01:08:11.691 fused_ordering(688) 01:08:11.691 fused_ordering(689) 01:08:11.691 fused_ordering(690) 01:08:11.691 fused_ordering(691) 01:08:11.691 fused_ordering(692) 01:08:11.691 fused_ordering(693) 01:08:11.691 fused_ordering(694) 01:08:11.691 fused_ordering(695) 01:08:11.691 fused_ordering(696) 01:08:11.691 fused_ordering(697) 01:08:11.691 fused_ordering(698) 01:08:11.691 fused_ordering(699) 01:08:11.691 fused_ordering(700) 01:08:11.691 fused_ordering(701) 01:08:11.691 fused_ordering(702) 01:08:11.691 fused_ordering(703) 01:08:11.691 fused_ordering(704) 01:08:11.691 fused_ordering(705) 01:08:11.691 fused_ordering(706) 01:08:11.691 fused_ordering(707) 01:08:11.691 fused_ordering(708) 01:08:11.691 fused_ordering(709) 01:08:11.691 fused_ordering(710) 01:08:11.691 fused_ordering(711) 01:08:11.691 fused_ordering(712) 01:08:11.691 fused_ordering(713) 01:08:11.691 fused_ordering(714) 01:08:11.691 fused_ordering(715) 01:08:11.691 fused_ordering(716) 01:08:11.691 fused_ordering(717) 01:08:11.691 fused_ordering(718) 01:08:11.691 fused_ordering(719) 01:08:11.691 fused_ordering(720) 01:08:11.691 fused_ordering(721) 01:08:11.691 fused_ordering(722) 01:08:11.691 fused_ordering(723) 01:08:11.691 fused_ordering(724) 01:08:11.691 fused_ordering(725) 01:08:11.691 fused_ordering(726) 01:08:11.691 fused_ordering(727) 01:08:11.691 fused_ordering(728) 01:08:11.691 fused_ordering(729) 01:08:11.691 fused_ordering(730) 01:08:11.691 fused_ordering(731) 01:08:11.691 fused_ordering(732) 01:08:11.691 fused_ordering(733) 01:08:11.691 fused_ordering(734) 01:08:11.691 fused_ordering(735) 01:08:11.691 fused_ordering(736) 01:08:11.691 fused_ordering(737) 01:08:11.691 fused_ordering(738) 01:08:11.691 fused_ordering(739) 01:08:11.691 fused_ordering(740) 01:08:11.691 fused_ordering(741) 01:08:11.691 fused_ordering(742) 01:08:11.691 
fused_ordering(743) 01:08:11.691 fused_ordering(744) 01:08:11.691 fused_ordering(745) 01:08:11.691 fused_ordering(746) 01:08:11.691 fused_ordering(747) 01:08:11.691 fused_ordering(748) 01:08:11.691 fused_ordering(749) 01:08:11.691 fused_ordering(750) 01:08:11.691 fused_ordering(751) 01:08:11.691 fused_ordering(752) 01:08:11.691 fused_ordering(753) 01:08:11.691 fused_ordering(754) 01:08:11.691 fused_ordering(755) 01:08:11.691 fused_ordering(756) 01:08:11.691 fused_ordering(757) 01:08:11.691 fused_ordering(758) 01:08:11.691 fused_ordering(759) 01:08:11.691 fused_ordering(760) 01:08:11.691 fused_ordering(761) 01:08:11.691 fused_ordering(762) 01:08:11.691 fused_ordering(763) 01:08:11.691 fused_ordering(764) 01:08:11.691 fused_ordering(765) 01:08:11.691 fused_ordering(766) 01:08:11.691 fused_ordering(767) 01:08:11.691 fused_ordering(768) 01:08:11.691 fused_ordering(769) 01:08:11.691 fused_ordering(770) 01:08:11.691 fused_ordering(771) 01:08:11.691 fused_ordering(772) 01:08:11.691 fused_ordering(773) 01:08:11.691 fused_ordering(774) 01:08:11.691 fused_ordering(775) 01:08:11.691 fused_ordering(776) 01:08:11.691 fused_ordering(777) 01:08:11.691 fused_ordering(778) 01:08:11.691 fused_ordering(779) 01:08:11.691 fused_ordering(780) 01:08:11.691 fused_ordering(781) 01:08:11.691 fused_ordering(782) 01:08:11.691 fused_ordering(783) 01:08:11.691 fused_ordering(784) 01:08:11.691 fused_ordering(785) 01:08:11.691 fused_ordering(786) 01:08:11.691 fused_ordering(787) 01:08:11.691 fused_ordering(788) 01:08:11.691 fused_ordering(789) 01:08:11.691 fused_ordering(790) 01:08:11.691 fused_ordering(791) 01:08:11.691 fused_ordering(792) 01:08:11.691 fused_ordering(793) 01:08:11.691 fused_ordering(794) 01:08:11.691 fused_ordering(795) 01:08:11.691 fused_ordering(796) 01:08:11.691 fused_ordering(797) 01:08:11.691 fused_ordering(798) 01:08:11.691 fused_ordering(799) 01:08:11.691 fused_ordering(800) 01:08:11.691 fused_ordering(801) 01:08:11.691 fused_ordering(802) 01:08:11.691 fused_ordering(803) 01:08:11.691 fused_ordering(804) 01:08:11.691 fused_ordering(805) 01:08:11.691 fused_ordering(806) 01:08:11.691 fused_ordering(807) 01:08:11.691 fused_ordering(808) 01:08:11.691 fused_ordering(809) 01:08:11.691 fused_ordering(810) 01:08:11.691 fused_ordering(811) 01:08:11.691 fused_ordering(812) 01:08:11.691 fused_ordering(813) 01:08:11.691 fused_ordering(814) 01:08:11.691 fused_ordering(815) 01:08:11.691 fused_ordering(816) 01:08:11.691 fused_ordering(817) 01:08:11.691 fused_ordering(818) 01:08:11.691 fused_ordering(819) 01:08:11.691 fused_ordering(820) 01:08:12.258 fused_ordering(821) 01:08:12.258 fused_ordering(822) 01:08:12.258 fused_ordering(823) 01:08:12.258 fused_ordering(824) 01:08:12.258 fused_ordering(825) 01:08:12.258 fused_ordering(826) 01:08:12.258 fused_ordering(827) 01:08:12.258 fused_ordering(828) 01:08:12.258 fused_ordering(829) 01:08:12.258 fused_ordering(830) 01:08:12.258 fused_ordering(831) 01:08:12.258 fused_ordering(832) 01:08:12.258 fused_ordering(833) 01:08:12.258 fused_ordering(834) 01:08:12.258 fused_ordering(835) 01:08:12.258 fused_ordering(836) 01:08:12.258 fused_ordering(837) 01:08:12.258 fused_ordering(838) 01:08:12.258 fused_ordering(839) 01:08:12.258 fused_ordering(840) 01:08:12.258 fused_ordering(841) 01:08:12.258 fused_ordering(842) 01:08:12.258 fused_ordering(843) 01:08:12.258 fused_ordering(844) 01:08:12.258 fused_ordering(845) 01:08:12.258 fused_ordering(846) 01:08:12.258 fused_ordering(847) 01:08:12.258 fused_ordering(848) 01:08:12.258 fused_ordering(849) 01:08:12.258 fused_ordering(850) 
01:08:12.258 fused_ordering(851) 01:08:12.258 fused_ordering(852) 01:08:12.258 fused_ordering(853) 01:08:12.258 fused_ordering(854) 01:08:12.258 fused_ordering(855) 01:08:12.258 fused_ordering(856) 01:08:12.258 fused_ordering(857) 01:08:12.258 fused_ordering(858) 01:08:12.258 fused_ordering(859) 01:08:12.258 fused_ordering(860) 01:08:12.258 fused_ordering(861) 01:08:12.258 fused_ordering(862) 01:08:12.258 fused_ordering(863) 01:08:12.258 fused_ordering(864) 01:08:12.258 fused_ordering(865) 01:08:12.258 fused_ordering(866) 01:08:12.258 fused_ordering(867) 01:08:12.258 fused_ordering(868) 01:08:12.258 fused_ordering(869) 01:08:12.258 fused_ordering(870) 01:08:12.258 fused_ordering(871) 01:08:12.258 fused_ordering(872) 01:08:12.258 fused_ordering(873) 01:08:12.258 fused_ordering(874) 01:08:12.258 fused_ordering(875) 01:08:12.258 fused_ordering(876) 01:08:12.258 fused_ordering(877) 01:08:12.258 fused_ordering(878) 01:08:12.258 fused_ordering(879) 01:08:12.258 fused_ordering(880) 01:08:12.258 fused_ordering(881) 01:08:12.258 fused_ordering(882) 01:08:12.258 fused_ordering(883) 01:08:12.258 fused_ordering(884) 01:08:12.258 fused_ordering(885) 01:08:12.258 fused_ordering(886) 01:08:12.258 fused_ordering(887) 01:08:12.258 fused_ordering(888) 01:08:12.258 fused_ordering(889) 01:08:12.258 fused_ordering(890) 01:08:12.258 fused_ordering(891) 01:08:12.258 fused_ordering(892) 01:08:12.258 fused_ordering(893) 01:08:12.258 fused_ordering(894) 01:08:12.258 fused_ordering(895) 01:08:12.258 fused_ordering(896) 01:08:12.258 fused_ordering(897) 01:08:12.258 fused_ordering(898) 01:08:12.258 fused_ordering(899) 01:08:12.258 fused_ordering(900) 01:08:12.258 fused_ordering(901) 01:08:12.258 fused_ordering(902) 01:08:12.258 fused_ordering(903) 01:08:12.258 fused_ordering(904) 01:08:12.258 fused_ordering(905) 01:08:12.258 fused_ordering(906) 01:08:12.258 fused_ordering(907) 01:08:12.258 fused_ordering(908) 01:08:12.258 fused_ordering(909) 01:08:12.258 fused_ordering(910) 01:08:12.258 fused_ordering(911) 01:08:12.258 fused_ordering(912) 01:08:12.258 fused_ordering(913) 01:08:12.258 fused_ordering(914) 01:08:12.258 fused_ordering(915) 01:08:12.258 fused_ordering(916) 01:08:12.258 fused_ordering(917) 01:08:12.258 fused_ordering(918) 01:08:12.258 fused_ordering(919) 01:08:12.258 fused_ordering(920) 01:08:12.258 fused_ordering(921) 01:08:12.258 fused_ordering(922) 01:08:12.258 fused_ordering(923) 01:08:12.258 fused_ordering(924) 01:08:12.258 fused_ordering(925) 01:08:12.258 fused_ordering(926) 01:08:12.258 fused_ordering(927) 01:08:12.258 fused_ordering(928) 01:08:12.258 fused_ordering(929) 01:08:12.258 fused_ordering(930) 01:08:12.258 fused_ordering(931) 01:08:12.258 fused_ordering(932) 01:08:12.258 fused_ordering(933) 01:08:12.258 fused_ordering(934) 01:08:12.258 fused_ordering(935) 01:08:12.258 fused_ordering(936) 01:08:12.258 fused_ordering(937) 01:08:12.258 fused_ordering(938) 01:08:12.258 fused_ordering(939) 01:08:12.258 fused_ordering(940) 01:08:12.258 fused_ordering(941) 01:08:12.258 fused_ordering(942) 01:08:12.258 fused_ordering(943) 01:08:12.258 fused_ordering(944) 01:08:12.258 fused_ordering(945) 01:08:12.258 fused_ordering(946) 01:08:12.258 fused_ordering(947) 01:08:12.258 fused_ordering(948) 01:08:12.258 fused_ordering(949) 01:08:12.258 fused_ordering(950) 01:08:12.258 fused_ordering(951) 01:08:12.258 fused_ordering(952) 01:08:12.258 fused_ordering(953) 01:08:12.258 fused_ordering(954) 01:08:12.258 fused_ordering(955) 01:08:12.258 fused_ordering(956) 01:08:12.258 fused_ordering(957) 01:08:12.258 
fused_ordering(958) 01:08:12.258 fused_ordering(959) 01:08:12.258 fused_ordering(960) 01:08:12.258 fused_ordering(961) 01:08:12.258 fused_ordering(962) 01:08:12.258 fused_ordering(963) 01:08:12.258 fused_ordering(964) 01:08:12.258 fused_ordering(965) 01:08:12.258 fused_ordering(966) 01:08:12.258 fused_ordering(967) 01:08:12.258 fused_ordering(968) 01:08:12.258 fused_ordering(969) 01:08:12.258 fused_ordering(970) 01:08:12.258 fused_ordering(971) 01:08:12.258 fused_ordering(972) 01:08:12.258 fused_ordering(973) 01:08:12.258 fused_ordering(974) 01:08:12.258 fused_ordering(975) 01:08:12.258 fused_ordering(976) 01:08:12.258 fused_ordering(977) 01:08:12.258 fused_ordering(978) 01:08:12.258 fused_ordering(979) 01:08:12.258 fused_ordering(980) 01:08:12.258 fused_ordering(981) 01:08:12.258 fused_ordering(982) 01:08:12.258 fused_ordering(983) 01:08:12.258 fused_ordering(984) 01:08:12.258 fused_ordering(985) 01:08:12.258 fused_ordering(986) 01:08:12.258 fused_ordering(987) 01:08:12.258 fused_ordering(988) 01:08:12.258 fused_ordering(989) 01:08:12.258 fused_ordering(990) 01:08:12.258 fused_ordering(991) 01:08:12.258 fused_ordering(992) 01:08:12.258 fused_ordering(993) 01:08:12.258 fused_ordering(994) 01:08:12.258 fused_ordering(995) 01:08:12.258 fused_ordering(996) 01:08:12.258 fused_ordering(997) 01:08:12.258 fused_ordering(998) 01:08:12.258 fused_ordering(999) 01:08:12.258 fused_ordering(1000) 01:08:12.258 fused_ordering(1001) 01:08:12.258 fused_ordering(1002) 01:08:12.258 fused_ordering(1003) 01:08:12.258 fused_ordering(1004) 01:08:12.258 fused_ordering(1005) 01:08:12.258 fused_ordering(1006) 01:08:12.258 fused_ordering(1007) 01:08:12.258 fused_ordering(1008) 01:08:12.258 fused_ordering(1009) 01:08:12.258 fused_ordering(1010) 01:08:12.258 fused_ordering(1011) 01:08:12.258 fused_ordering(1012) 01:08:12.258 fused_ordering(1013) 01:08:12.258 fused_ordering(1014) 01:08:12.258 fused_ordering(1015) 01:08:12.258 fused_ordering(1016) 01:08:12.258 fused_ordering(1017) 01:08:12.258 fused_ordering(1018) 01:08:12.258 fused_ordering(1019) 01:08:12.258 fused_ordering(1020) 01:08:12.258 fused_ordering(1021) 01:08:12.258 fused_ordering(1022) 01:08:12.258 fused_ordering(1023) 01:08:12.258 11:05:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 01:08:12.258 11:05:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 01:08:12.258 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 01:08:12.258 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 01:08:12.517 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:08:12.517 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:08:12.518 rmmod nvme_tcp 01:08:12.518 rmmod nvme_fabrics 01:08:12.518 rmmod nvme_keyring 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 87511 ']' 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 87511 01:08:12.518 11:05:17 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 87511 ']' 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 87511 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87511 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:08:12.518 killing process with pid 87511 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87511' 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 87511 01:08:12.518 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 87511 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:08:12.776 01:08:12.776 real 0m4.553s 01:08:12.776 user 0m5.301s 01:08:12.776 sys 0m1.674s 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 01:08:12.776 ************************************ 01:08:12.776 END TEST nvmf_fused_ordering 01:08:12.776 11:05:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:08:12.776 ************************************ 01:08:12.776 11:05:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:08:12.776 11:05:17 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 01:08:12.776 11:05:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:08:12.776 11:05:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:08:12.776 11:05:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:12.776 ************************************ 01:08:12.776 START TEST nvmf_delete_subsystem 01:08:12.776 ************************************ 01:08:12.776 11:05:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 01:08:13.035 * Looking for test storage... 
01:08:13.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:08:13.035 Cannot find device "nvmf_tgt_br" 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:08:13.035 Cannot find device "nvmf_tgt_br2" 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:08:13.035 Cannot find device "nvmf_tgt_br" 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:08:13.035 Cannot find device "nvmf_tgt_br2" 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:13.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:13.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:13.035 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:13.294 11:05:18 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:08:13.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:13.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 01:08:13.294 01:08:13.294 --- 10.0.0.2 ping statistics --- 01:08:13.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:13.294 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:08:13.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:13.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 01:08:13.294 01:08:13.294 --- 10.0.0.3 ping statistics --- 01:08:13.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:13.294 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:13.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:13.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:08:13.294 01:08:13.294 --- 10.0.0.1 ping statistics --- 01:08:13.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:13.294 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=87776 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 87776 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 87776 ']' 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:13.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
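The wall of ip/iptables output above is nvmf_veth_init building the isolated test network the suite runs on: the "Cannot find device" and "Cannot open network namespace" messages are just best-effort cleanup of a previous run, after which a fresh namespace is created for the target, veth pairs are bridged back to the host, port 4420 is opened, and connectivity is ping-verified before nvmf_tgt is started inside the namespace. A minimal sketch of that topology, condensed from the commands printed above (the real helper in nvmf/common.sh also wires up the second target interface at 10.0.0.3 and handles teardown):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # host -> target sanity check, as in the log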
01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:13.294 11:05:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:13.294 [2024-07-22 11:05:18.491241] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:13.294 [2024-07-22 11:05:18.491320] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:13.553 [2024-07-22 11:05:18.638896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:08:13.553 [2024-07-22 11:05:18.726944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:13.553 [2024-07-22 11:05:18.727247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:13.554 [2024-07-22 11:05:18.727407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:13.554 [2024-07-22 11:05:18.727634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:13.554 [2024-07-22 11:05:18.727807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:13.554 [2024-07-22 11:05:18.728118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:08:13.554 [2024-07-22 11:05:18.728130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:14.490 [2024-07-22 11:05:19.599376] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:14.490 [2024-07-22 11:05:19.616345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:14.490 NULL1 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:14.490 Delay0 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=87828 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 01:08:14.490 11:05:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 01:08:14.749 [2024-07-22 11:05:19.810386] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
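Before the load is applied, delete_subsystem.sh provisions everything over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, at most 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev so that I/O is guaranteed to still be in flight when the subsystem is deleted later. Condensed into the equivalent direct rpc.py / spdk_nvme_perf calls (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; treating the delay values as microseconds, i.e. roughly 1 s per I/O, is an assumption based on the delay bdev's documented units):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512                      # 1000 MiB backing bdev, 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &    # 5 s of QD-128 70/30 randrw, 512 B I/O
  perf_pid=$!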
01:08:16.691 11:05:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:08:16.691 11:05:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:16.691 11:05:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 starting I/O failed: -6 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 [2024-07-22 11:05:21.863832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7d70000c00 is same with the state(5) to be set 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 
01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.691 Read completed with error (sct=0, sc=8) 01:08:16.691 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 
Read completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 starting I/O failed: -6 01:08:16.692 [2024-07-22 11:05:21.865942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab2c0 is same with the state(5) to be set 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed 
with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:16.692 Write completed with error (sct=0, sc=8) 01:08:16.692 Read completed with error (sct=0, sc=8) 01:08:17.629 [2024-07-22 11:05:22.826545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a9ae0 is same with the state(5) to be set 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 [2024-07-22 11:05:22.864312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ab5e0 is same with the state(5) to be set 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, 
sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 [2024-07-22 11:05:22.864892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aafa0 is same with the state(5) to be set 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 [2024-07-22 11:05:22.865561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7d7000cff0 is same with the state(5) to be set 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 
01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 Write completed with error (sct=0, sc=8) 01:08:17.893 Read completed with error (sct=0, sc=8) 01:08:17.893 [2024-07-22 11:05:22.865736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7d7000d770 is same with the state(5) to be set 01:08:17.893 Initializing NVMe Controllers 01:08:17.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:08:17.893 Controller IO queue size 128, less than required. 01:08:17.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:08:17.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:08:17.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:08:17.893 Initialization complete. Launching workers. 01:08:17.893 ======================================================== 01:08:17.893 Latency(us) 01:08:17.893 Device Information : IOPS MiB/s Average min max 01:08:17.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.00 0.08 922777.90 1318.94 2002862.02 01:08:17.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.95 0.08 895155.68 401.81 1020796.55 01:08:17.893 ======================================================== 01:08:17.893 Total : 335.96 0.16 908804.30 401.81 2002862.02 01:08:17.893 01:08:17.894 [2024-07-22 11:05:22.867174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a9ae0 (9): Bad file descriptor 01:08:17.894 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 01:08:17.894 11:05:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:17.894 11:05:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 01:08:17.894 11:05:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87828 01:08:17.894 11:05:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87828 01:08:18.468 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (87828) - No such process 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 87828 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 87828 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 87828 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:18.468 [2024-07-22 11:05:23.394876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=87872 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87872 01:08:18.468 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:08:18.468 [2024-07-22 11:05:23.583186] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
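The burst of "completed with error (sct=0, sc=8)" entries above is the intended outcome: nvmf_delete_subsystem fires while the 5-second perf run still has queue-depth-128 commands parked in the Delay0 bdev, so every outstanding I/O is failed back to the initiator (sct=0/sc=8 corresponds to the generic NVMe status for a command aborted due to submission-queue deletion) and spdk_nvme_perf exits reporting "errors occurred". The script then only has to confirm the perf process disappears within a bounded time before re-creating the subsystem and launching the second, 3-second run seen just above, which is expected to complete cleanly. A sketch of that check, reconstructed from the trace (variable name illustrative):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # issued with I/O still in flight
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do                 # perf still running?
      (( delay++ > 30 )) && exit 1                          # give up after ~15 s of 0.5 s sleeps
      sleep 0.5
  done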
01:08:18.726 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:08:18.726 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87872 01:08:18.726 11:05:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:08:19.292 11:05:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:08:19.292 11:05:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87872 01:08:19.292 11:05:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:08:19.857 11:05:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:08:19.857 11:05:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87872 01:08:19.857 11:05:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:08:20.422 11:05:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:08:20.422 11:05:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87872 01:08:20.422 11:05:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:08:20.989 11:05:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:08:20.989 11:05:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87872 01:08:20.989 11:05:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:08:21.248 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:08:21.248 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87872 01:08:21.248 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:08:21.507 Initializing NVMe Controllers 01:08:21.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:08:21.507 Controller IO queue size 128, less than required. 01:08:21.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:08:21.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:08:21.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:08:21.507 Initialization complete. Launching workers. 
01:08:21.507 ======================================================== 01:08:21.507 Latency(us) 01:08:21.507 Device Information : IOPS MiB/s Average min max 01:08:21.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005250.37 1000166.16 1019361.02 01:08:21.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1008205.03 1000255.12 1023707.57 01:08:21.507 ======================================================== 01:08:21.507 Total : 256.00 0.12 1006727.70 1000166.16 1023707.57 01:08:21.507 01:08:21.765 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:08:21.765 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87872 01:08:21.765 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (87872) - No such process 01:08:21.765 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 87872 01:08:21.765 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:08:21.765 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 01:08:21.765 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 01:08:21.765 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 01:08:22.024 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:08:22.024 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 01:08:22.024 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 01:08:22.024 11:05:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:08:22.024 rmmod nvme_tcp 01:08:22.024 rmmod nvme_fabrics 01:08:22.024 rmmod nvme_keyring 01:08:22.024 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 87776 ']' 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 87776 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 87776 ']' 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 87776 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87776 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:08:22.025 killing process with pid 87776 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87776' 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 87776 01:08:22.025 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 87776 01:08:22.284 11:05:27 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:08:22.284 01:08:22.284 real 0m9.376s 01:08:22.284 user 0m29.299s 01:08:22.284 sys 0m1.198s 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 01:08:22.284 11:05:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:08:22.284 ************************************ 01:08:22.284 END TEST nvmf_delete_subsystem 01:08:22.284 ************************************ 01:08:22.284 11:05:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:08:22.284 11:05:27 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 01:08:22.284 11:05:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:08:22.284 11:05:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:08:22.284 11:05:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:22.284 ************************************ 01:08:22.284 START TEST nvmf_ns_masking 01:08:22.284 ************************************ 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 01:08:22.284 * Looking for test storage... 
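Between the second perf summary and the start of the next suite, nvmftestfini tears the environment back down: the error trap is cleared, the nvme-tcp and nvme-fabrics modules are unloaded, the target process (pid 87776) is killed and reaped after a sanity check on its name, and the namespace plus the initiator address are removed so the next test can rebuild them from scratch. Roughly, under the helper names that appear in the trace (the exact cleanup lives in nvmf/common.sh and autotest_common.sh):

  trap - SIGINT SIGTERM EXIT
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"     # killprocess first checks the comm name (reactor_0, not sudo)
  _remove_spdk_ns                        # appears to delete nvmf_tgt_ns_spdk and the interfaces inside it
  ip -4 addr flush nvmf_init_if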
01:08:22.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:22.284 11:05:27 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 01:08:22.285 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 01:08:22.543 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=50f53032-1d47-459f-bb28-baa60ad6b69e 01:08:22.543 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 01:08:22.543 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ef9a3a23-5daf-4eaa-bc26-6096b2c0fca6 01:08:22.543 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 01:08:22.543 
11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 01:08:22.543 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 01:08:22.543 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7ba5feac-46f7-4ee2-8276-24cd7e08e9ca 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:08:22.544 Cannot find device "nvmf_tgt_br" 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 01:08:22.544 11:05:27 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:08:22.544 Cannot find device "nvmf_tgt_br2" 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:08:22.544 Cannot find device "nvmf_tgt_br" 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:08:22.544 Cannot find device "nvmf_tgt_br2" 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:22.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:22.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:08:22.544 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:22.804 11:05:27 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:08:22.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:22.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 01:08:22.804 01:08:22.804 --- 10.0.0.2 ping statistics --- 01:08:22.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:22.804 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:08:22.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:22.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 01:08:22.804 01:08:22.804 --- 10.0.0.3 ping statistics --- 01:08:22.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:22.804 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:22.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:08:22.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 01:08:22.804 01:08:22.804 --- 10.0.0.1 ping statistics --- 01:08:22.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:22.804 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:08:22.804 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=88118 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 88118 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 88118 ']' 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:22.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:22.805 11:05:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:08:22.805 [2024-07-22 11:05:27.929396] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:22.805 [2024-07-22 11:05:27.929477] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:23.063 [2024-07-22 11:05:28.074421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:23.063 [2024-07-22 11:05:28.157018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:23.063 [2024-07-22 11:05:28.157070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:08:23.063 [2024-07-22 11:05:28.157083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:23.063 [2024-07-22 11:05:28.157094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:23.063 [2024-07-22 11:05:28.157114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:23.063 [2024-07-22 11:05:28.157143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:08:23.997 11:05:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:23.997 11:05:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 01:08:23.997 11:05:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:08:23.997 11:05:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:23.997 11:05:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:08:23.997 11:05:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:23.997 11:05:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:08:24.254 [2024-07-22 11:05:29.255363] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:24.254 11:05:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 01:08:24.254 11:05:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 01:08:24.254 11:05:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:08:24.512 Malloc1 01:08:24.512 11:05:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 01:08:24.770 Malloc2 01:08:24.770 11:05:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:08:25.027 11:05:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 01:08:25.593 11:05:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:08:25.851 [2024-07-22 11:05:30.859899] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:25.851 11:05:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 01:08:25.851 11:05:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7ba5feac-46f7-4ee2-8276-24cd7e08e9ca -a 10.0.0.2 -s 4420 -i 4 01:08:25.851 11:05:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 01:08:25.851 11:05:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 01:08:25.851 11:05:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:08:25.851 11:05:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:08:25.851 11:05:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
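For readability, the ns_masking setup traced above reduces to a short target-side RPC sequence plus one initiator connect. The sketch below only condenses commands already visible in the log (rpc.py abbreviated to $rpc); it is not additional captured output, and the flag comments are a best-effort reading of the test's intent rather than authoritative documentation.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options exactly as passed by the test
    $rpc bdev_malloc_create 64 512 -b Malloc1             # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator: connect with an explicit host NQN (masking on the target keys off this NQN)
    # plus the HOSTID generated by uuidgen earlier in the trace.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 7ba5feac-46f7-4ee2-8276-24cd7e08e9ca -a 10.0.0.2 -s 4420 -i 4

The sleep 2 above is waitforserial polling lsblk for the serial SPDKISFASTANDAWESOME before the visibility checks begin.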
01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:08:28.376 [ 0]:0x1 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fff55f0cf5f749d0bc2ddd914e29ea31 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fff55f0cf5f749d0bc2ddd914e29ea31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:08:28.376 [ 0]:0x1 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fff55f0cf5f749d0bc2ddd914e29ea31 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fff55f0cf5f749d0bc2ddd914e29ea31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:08:28.376 [ 1]:0x2 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=d5081c93dbd2450db2347e976c12f379 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5081c93dbd2450db2347e976c12f379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 01:08:28.376 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:08:28.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:08:28.660 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:08:28.930 11:05:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 01:08:29.205 11:05:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 01:08:29.205 11:05:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7ba5feac-46f7-4ee2-8276-24cd7e08e9ca -a 10.0.0.2 -s 4420 -i 4 01:08:29.205 11:05:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 01:08:29.205 11:05:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 01:08:29.205 11:05:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:08:29.205 11:05:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 01:08:29.205 11:05:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 01:08:29.205 11:05:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:08:31.730 [ 0]:0x2 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5081c93dbd2450db2347e976c12f379 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5081c93dbd2450db2347e976c12f379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:31.730 [ 0]:0x1 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fff55f0cf5f749d0bc2ddd914e29ea31 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fff55f0cf5f749d0bc2ddd914e29ea31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:31.730 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:08:31.730 [ 1]:0x2 01:08:31.731 11:05:36 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:08:31.731 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:31.994 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5081c93dbd2450db2347e976c12f379 01:08:31.994 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5081c93dbd2450db2347e976c12f379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:31.994 11:05:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:08:31.994 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:08:32.255 [ 0]:0x2 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5081c93dbd2450db2347e976c12f379 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5081c93dbd2450db2347e976c12f379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
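Condensed from commands already present in the trace (same $rpc abbreviation), the visibility toggle that the checks above exercise for NSID 1 is roughly:

    # A namespace added with --no-auto-visible starts hidden from every host.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 becomes visible to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again
    # ns_is_visible verifies the state from the initiator:
    nvme list-ns /dev/nvme0 | grep 0x1                    # NSID listed only while host1 is allowed
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # the test treats an all-zero NGUID as masked

The NOT wrapper around ns_is_visible 0x1 is how the test asserts the hidden case without aborting on the non-zero exit status.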
01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:08:32.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:08:32.255 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:08:32.513 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 01:08:32.513 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7ba5feac-46f7-4ee2-8276-24cd7e08e9ca -a 10.0.0.2 -s 4420 -i 4 01:08:32.797 11:05:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 01:08:32.797 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 01:08:32.797 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:08:32.797 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 01:08:32.797 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 01:08:32.797 11:05:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:34.697 [ 0]:0x1 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fff55f0cf5f749d0bc2ddd914e29ea31 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fff55f0cf5f749d0bc2ddd914e29ea31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 01:08:34.697 [ 1]:0x2 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:08:34.697 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:34.955 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5081c93dbd2450db2347e976c12f379 01:08:34.955 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5081c93dbd2450db2347e976c12f379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:34.955 11:05:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:08:35.213 [ 0]:0x2 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5081c93dbd2450db2347e976c12f379 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5081c93dbd2450db2347e976c12f379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:35.213 11:05:40 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:08:35.213 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 01:08:35.472 [2024-07-22 11:05:40.653145] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 01:08:35.472 2024/07/22 11:05:40 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 01:08:35.472 request: 01:08:35.472 { 01:08:35.472 "method": "nvmf_ns_remove_host", 01:08:35.472 "params": { 01:08:35.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:35.472 "nsid": 2, 01:08:35.472 "host": "nqn.2016-06.io.spdk:host1" 01:08:35.472 } 01:08:35.472 } 01:08:35.472 Got JSON-RPC error response 01:08:35.472 GoRPCClient: error on JSON-RPC call 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 01:08:35.472 11:05:40 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:35.472 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:08:35.731 [ 0]:0x2 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5081c93dbd2450db2347e976c12f379 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5081c93dbd2450db2347e976c12f379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:08:35.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=88505 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 88505 /var/tmp/host.sock 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 88505 ']' 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:35.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
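Two notes on this stretch of the trace. The nvmf_ns_remove_host call against NSID 2 above is wrapped in NOT because failure is the expected result: NSID 2 was added without --no-auto-visible, and the -32602 "Invalid parameters" response is exactly what the wrapper asserts. The spdk_tgt instance started here (hostpid 88505) then plays the host role for the remaining checks; a condensed sketch of the host-side commands that follow in the log (rpc.py again abbreviated to $rpc):

    # Host-side SPDK app: core mask 2, RPC socket /var/tmp/host.sock.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach a bdev_nvme controller over TCP as host1, then read back the bdev UUID and
    # compare it with the NGUID set via -g on the target side (host2/nvme1 follows the same pattern).
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    $rpc -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'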
01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:35.731 11:05:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:08:35.731 [2024-07-22 11:05:40.898286] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:35.731 [2024-07-22 11:05:40.898403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88505 ] 01:08:35.989 [2024-07-22 11:05:41.038165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:35.989 [2024-07-22 11:05:41.127014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:08:36.924 11:05:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:36.924 11:05:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 01:08:36.924 11:05:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:08:37.182 11:05:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:08:37.440 11:05:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 50f53032-1d47-459f-bb28-baa60ad6b69e 01:08:37.440 11:05:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 01:08:37.440 11:05:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 50F530321D47459FBB28BAA60AD6B69E -i 01:08:37.699 11:05:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ef9a3a23-5daf-4eaa-bc26-6096b2c0fca6 01:08:37.699 11:05:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 01:08:37.699 11:05:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EF9A3A235DAF4EAABC266096B2C0FCA6 -i 01:08:37.960 11:05:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:08:38.220 11:05:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 01:08:38.479 11:05:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 01:08:38.479 11:05:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 01:08:38.737 nvme0n1 01:08:38.737 11:05:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 01:08:38.737 11:05:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 01:08:39.303 nvme1n2 01:08:39.303 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 01:08:39.303 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 01:08:39.303 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 01:08:39.303 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 01:08:39.303 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 01:08:39.560 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 01:08:39.560 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 01:08:39.560 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 01:08:39.560 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 01:08:39.818 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 50f53032-1d47-459f-bb28-baa60ad6b69e == \5\0\f\5\3\0\3\2\-\1\d\4\7\-\4\5\9\f\-\b\b\2\8\-\b\a\a\6\0\a\d\6\b\6\9\e ]] 01:08:39.818 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 01:08:39.818 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 01:08:39.818 11:05:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ef9a3a23-5daf-4eaa-bc26-6096b2c0fca6 == \e\f\9\a\3\a\2\3\-\5\d\a\f\-\4\e\a\a\-\b\c\2\6\-\6\0\9\6\b\2\c\0\f\c\a\6 ]] 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 88505 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 88505 ']' 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 88505 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88505 01:08:40.075 killing process with pid 88505 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88505' 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 88505 01:08:40.075 11:05:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 88505 01:08:40.678 11:05:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:08:40.937 11:05:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 01:08:40.937 11:05:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 01:08:40.937 11:05:45 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 01:08:40.937 11:05:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:08:40.937 rmmod nvme_tcp 01:08:40.937 rmmod nvme_fabrics 01:08:40.937 rmmod nvme_keyring 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 88118 ']' 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 88118 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 88118 ']' 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 88118 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88118 01:08:40.937 killing process with pid 88118 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88118' 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 88118 01:08:40.937 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 88118 01:08:41.194 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:08:41.194 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:08:41.195 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:08:41.195 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:08:41.195 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 01:08:41.453 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:41.453 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:41.453 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:41.453 11:05:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:08:41.453 01:08:41.453 real 0m19.094s 01:08:41.453 user 0m30.535s 01:08:41.453 sys 0m3.073s 01:08:41.453 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 01:08:41.453 11:05:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:08:41.453 ************************************ 01:08:41.453 END TEST nvmf_ns_masking 01:08:41.453 ************************************ 01:08:41.453 11:05:46 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 01:08:41.453 11:05:46 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 01:08:41.453 11:05:46 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 01:08:41.453 11:05:46 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:08:41.453 11:05:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:08:41.453 11:05:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:08:41.453 11:05:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:41.453 ************************************ 01:08:41.453 START TEST nvmf_host_management 01:08:41.453 ************************************ 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:08:41.453 * Looking for test storage... 01:08:41.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:08:41.453 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:08:41.454 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:08:41.711 Cannot find device "nvmf_tgt_br" 01:08:41.711 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 01:08:41.711 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:08:41.711 Cannot find device "nvmf_tgt_br2" 01:08:41.711 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 01:08:41.711 11:05:46 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:08:41.712 Cannot find device "nvmf_tgt_br" 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:08:41.712 Cannot find device "nvmf_tgt_br2" 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:41.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:41.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:08:41.712 
11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:08:41.712 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:08:41.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:41.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 01:08:41.970 01:08:41.970 --- 10.0.0.2 ping statistics --- 01:08:41.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:41.970 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:08:41.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:41.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 01:08:41.970 01:08:41.970 --- 10.0.0.3 ping statistics --- 01:08:41.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:41.970 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:41.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:41.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 01:08:41.970 01:08:41.970 --- 10.0.0.1 ping statistics --- 01:08:41.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:41.970 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:08:41.970 11:05:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:08:41.970 11:05:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:41.970 11:05:47 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 01:08:41.970 11:05:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=88874 01:08:41.970 11:05:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 01:08:41.970 11:05:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 88874 01:08:41.970 11:05:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 88874 ']' 01:08:41.970 11:05:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:41.970 11:05:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:41.971 11:05:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:41.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:41.971 11:05:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:41.971 11:05:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:41.971 [2024-07-22 11:05:47.101850] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:41.971 [2024-07-22 11:05:47.102030] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:42.229 [2024-07-22 11:05:47.246581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:08:42.229 [2024-07-22 11:05:47.338633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:42.229 [2024-07-22 11:05:47.338673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:42.229 [2024-07-22 11:05:47.338684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:42.229 [2024-07-22 11:05:47.338693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:42.229 [2024-07-22 11:05:47.338700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
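
The nvmf_veth_init and nvmfappstart steps traced above come down to ordinary iproute2/iptables work plus launching nvmf_tgt inside the namespace. The sketch below is a minimal approximation, not the harness code: it assumes root, reuses the interface names, addresses and binary path from the log, omits the second target interface (nvmf_tgt_if2 / 10.0.0.3) that the harness also wires up, and uses a simple poll of the RPC socket in place of waitforlisten.

# Build the test topology: one veth pair whose far end lives in the target
# namespace, one veth pair left on the host for the initiator, bridged together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

# Start the NVMe-oF target inside the namespace and wait for its RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
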
01:08:42.229 [2024-07-22 11:05:47.338790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:08:42.229 [2024-07-22 11:05:47.339087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:08:42.229 [2024-07-22 11:05:47.340810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:08:42.229 [2024-07-22 11:05:47.340811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:43.162 [2024-07-22 11:05:48.170459] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:43.162 Malloc0 01:08:43.162 [2024-07-22 11:05:48.243702] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=88947 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 88947 /var/tmp/bdevperf.sock 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 88947 ']' 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:08:43.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:08:43.162 { 01:08:43.162 "params": { 01:08:43.162 "name": "Nvme$subsystem", 01:08:43.162 "trtype": "$TEST_TRANSPORT", 01:08:43.162 "traddr": "$NVMF_FIRST_TARGET_IP", 01:08:43.162 "adrfam": "ipv4", 01:08:43.162 "trsvcid": "$NVMF_PORT", 01:08:43.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:08:43.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:08:43.162 "hdgst": ${hdgst:-false}, 01:08:43.162 "ddgst": ${ddgst:-false} 01:08:43.162 }, 01:08:43.162 "method": "bdev_nvme_attach_controller" 01:08:43.162 } 01:08:43.162 EOF 01:08:43.162 )") 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 01:08:43.162 11:05:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:08:43.163 "params": { 01:08:43.163 "name": "Nvme0", 01:08:43.163 "trtype": "tcp", 01:08:43.163 "traddr": "10.0.0.2", 01:08:43.163 "adrfam": "ipv4", 01:08:43.163 "trsvcid": "4420", 01:08:43.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:08:43.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:08:43.163 "hdgst": false, 01:08:43.163 "ddgst": false 01:08:43.163 }, 01:08:43.163 "method": "bdev_nvme_attach_controller" 01:08:43.163 }' 01:08:43.163 [2024-07-22 11:05:48.342188] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:43.163 [2024-07-22 11:05:48.342251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88947 ] 01:08:43.420 [2024-07-22 11:05:48.486128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:43.420 [2024-07-22 11:05:48.559333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:08:43.678 Running I/O for 10 seconds... 
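
Two pieces of plumbing in the run above are easy to miss. First, the rpcs.txt batch that starttarget feeds to rpc_cmd (its contents are not echoed in the trace) configures the target: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and a subsystem exposing that bdev to host0 on 10.0.0.2:4420. A rough standalone equivalent is sketched below; the RPC names are SPDK's rpc.py commands, but the serial number and exact option spellings are illustrative rather than copied from this log.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create -b Malloc0 64 512
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Second, gen_nvmf_target_json prints the bdev_nvme_attach_controller fragment shown above and bdevperf reads it as a JSON config over /dev/fd/63, i.e. bash process substitution. The same run can be reproduced with a plain file (the file name here is arbitrary, and the outer subsystems/config wrapper follows SPDK's JSON config layout; only the inner entry is taken verbatim from the printf output in the trace):

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10
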
01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:08:44.245 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:44.506 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 01:08:44.506 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 01:08:44.506 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 01:08:44.506 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 01:08:44.506 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 01:08:44.506 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:08:44.506 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:44.506 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:44.506 [2024-07-22 11:05:49.460922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.460975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.460991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 
01:08:44.506 [2024-07-22 11:05:49.461000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is 
same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.506 [2024-07-22 11:05:49.461228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7820 is same with the state(5) to be set 01:08:44.507 [2024-07-22 11:05:49.461742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.461783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.461807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.461819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.461831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.461840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.461852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.461861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.461873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.461882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.461908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.461918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.461945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.461956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.461967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.461979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.461990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:08:44.507 [2024-07-22 11:05:49.462101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.507 [2024-07-22 11:05:49.462281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.507 [2024-07-22 11:05:49.462290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 
[2024-07-22 11:05:49.462338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 
11:05:49.462551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 
11:05:49.462840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.462983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.462995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.463004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.463015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.463024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.463036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.463045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.463067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 
11:05:49.463077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.463088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.463097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.463109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.463118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.463131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.463141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.508 [2024-07-22 11:05:49.463152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.508 [2024-07-22 11:05:49.463161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.509 [2024-07-22 11:05:49.463172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.509 [2024-07-22 11:05:49.463182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.509 [2024-07-22 11:05:49.463194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.509 [2024-07-22 11:05:49.463203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.509 [2024-07-22 11:05:49.463214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.509 [2024-07-22 11:05:49.463224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.509 [2024-07-22 11:05:49.463235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.509 [2024-07-22 11:05:49.463244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.509 [2024-07-22 11:05:49.463256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.509 [2024-07-22 11:05:49.463270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.509 [2024-07-22 11:05:49.463281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.509 [2024-07-22 
11:05:49.463291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.509 [2024-07-22 11:05:49.463302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:44.509 [2024-07-22 11:05:49.463311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:44.509 [2024-07-22 11:05:49.463322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x268e5a0 is same with the state(5) to be set 01:08:44.509 [2024-07-22 11:05:49.463388] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x268e5a0 was disconnected and freed. reset controller. 01:08:44.509 [2024-07-22 11:05:49.464641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:08:44.509 task offset: 122880 on job bdev=Nvme0n1 fails 01:08:44.509 01:08:44.509 Latency(us) 01:08:44.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:44.509 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:08:44.509 Job: Nvme0n1 ended in about 0.72 seconds with error 01:08:44.509 Verification LBA range: start 0x0 length 0x400 01:08:44.509 Nvme0n1 : 0.72 1341.23 83.83 89.42 0.00 43490.05 6345.08 43134.60 01:08:44.509 =================================================================================================================== 01:08:44.509 Total : 1341.23 83.83 89.42 0.00 43490.05 6345.08 43134.60 01:08:44.509 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:44.509 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:08:44.509 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 01:08:44.509 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:44.509 [2024-07-22 11:05:49.466625] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:08:44.509 [2024-07-22 11:05:49.466650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21edcb0 (9): Bad file descriptor 01:08:44.509 11:05:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:08:44.509 11:05:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 01:08:44.509 [2024-07-22 11:05:49.476198] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
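
What the trace above exercises, end to end, is: wait for bdevperf to complete some reads, revoke host access so the target tears down the queue pair (the ABORTED - SQ DELETION completions), then re-admit the host so the initiator's automatic controller reset can reconnect. A rough standalone replay of those steps, using the same RPCs and thresholds as host_management.sh (the poll interval is illustrative; rpc.py defaults to the target's /var/tmp/spdk.sock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Wait until the Nvme0n1 bdev in bdevperf has serviced at least 100 reads.
while :; do
    reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
            jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done
# Revoke the host: in-flight I/O is aborted and bdevperf starts resetting the controller.
"$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host so the pending reset can reconnect, then give it a moment.
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
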
01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 88947 01:08:45.444 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (88947) - No such process 01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:08:45.444 11:05:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:08:45.444 { 01:08:45.444 "params": { 01:08:45.444 "name": "Nvme$subsystem", 01:08:45.444 "trtype": "$TEST_TRANSPORT", 01:08:45.444 "traddr": "$NVMF_FIRST_TARGET_IP", 01:08:45.444 "adrfam": "ipv4", 01:08:45.444 "trsvcid": "$NVMF_PORT", 01:08:45.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:08:45.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:08:45.444 "hdgst": ${hdgst:-false}, 01:08:45.444 "ddgst": ${ddgst:-false} 01:08:45.444 }, 01:08:45.444 "method": "bdev_nvme_attach_controller" 01:08:45.444 } 01:08:45.444 EOF 01:08:45.444 )") 01:08:45.445 11:05:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 01:08:45.445 11:05:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 01:08:45.445 11:05:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 01:08:45.445 11:05:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:08:45.445 "params": { 01:08:45.445 "name": "Nvme0", 01:08:45.445 "trtype": "tcp", 01:08:45.445 "traddr": "10.0.0.2", 01:08:45.445 "adrfam": "ipv4", 01:08:45.445 "trsvcid": "4420", 01:08:45.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:08:45.445 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:08:45.445 "hdgst": false, 01:08:45.445 "ddgst": false 01:08:45.445 }, 01:08:45.445 "method": "bdev_nvme_attach_controller" 01:08:45.445 }' 01:08:45.445 [2024-07-22 11:05:50.537026] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:45.445 [2024-07-22 11:05:50.537108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88996 ] 01:08:45.703 [2024-07-22 11:05:50.682633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:45.703 [2024-07-22 11:05:50.753044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:08:45.962 Running I/O for 1 seconds... 
01:08:46.897 01:08:46.897 Latency(us) 01:08:46.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:46.897 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:08:46.897 Verification LBA range: start 0x0 length 0x400 01:08:46.897 Nvme0n1 : 1.03 1491.47 93.22 0.00 0.00 42058.35 5719.51 39083.29 01:08:46.897 =================================================================================================================== 01:08:46.897 Total : 1491.47 93.22 0.00 0.00 42058.35 5719.51 39083.29 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:08:47.156 rmmod nvme_tcp 01:08:47.156 rmmod nvme_fabrics 01:08:47.156 rmmod nvme_keyring 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 88874 ']' 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 88874 01:08:47.156 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 88874 ']' 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 88874 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88874 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88874' 01:08:47.157 killing process with pid 88874 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 88874 01:08:47.157 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 88874 01:08:47.415 [2024-07-22 11:05:52.569048] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 01:08:47.415 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:08:47.415 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:08:47.415 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:08:47.415 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:08:47.415 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 01:08:47.415 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:47.415 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:47.415 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:47.674 11:05:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:08:47.674 11:05:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:08:47.674 01:08:47.674 real 0m6.111s 01:08:47.674 user 0m23.811s 01:08:47.674 sys 0m1.518s 01:08:47.674 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 01:08:47.674 ************************************ 01:08:47.674 END TEST nvmf_host_management 01:08:47.674 ************************************ 01:08:47.674 11:05:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:08:47.674 11:05:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:08:47.674 11:05:52 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:08:47.674 11:05:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:08:47.674 11:05:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:08:47.674 11:05:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:47.674 ************************************ 01:08:47.674 START TEST nvmf_lvol 01:08:47.674 ************************************ 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:08:47.674 * Looking for test storage... 
01:08:47.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:47.674 11:05:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:08:47.675 11:05:52 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:08:47.675 Cannot find device "nvmf_tgt_br" 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:08:47.675 Cannot find device "nvmf_tgt_br2" 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:08:47.675 Cannot find device "nvmf_tgt_br" 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:08:47.675 Cannot find device "nvmf_tgt_br2" 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:08:47.675 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:08:47.933 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:47.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:47.933 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 01:08:47.933 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:47.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:47.933 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 01:08:47.933 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:08:47.933 11:05:52 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:47.933 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:47.934 11:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:08:47.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:47.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 01:08:47.934 01:08:47.934 --- 10.0.0.2 ping statistics --- 01:08:47.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:47.934 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:08:47.934 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:47.934 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 01:08:47.934 01:08:47.934 --- 10.0.0.3 ping statistics --- 01:08:47.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:47.934 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:47.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:08:47.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:08:47.934 01:08:47.934 --- 10.0.0.1 ping statistics --- 01:08:47.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:47.934 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=89213 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 89213 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 89213 ']' 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:47.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 01:08:47.934 11:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:08:48.192 [2024-07-22 11:05:53.172537] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:08:48.192 [2024-07-22 11:05:53.172658] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:48.192 [2024-07-22 11:05:53.319689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:08:48.450 [2024-07-22 11:05:53.406285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:48.450 [2024-07-22 11:05:53.406352] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
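The nvmf_veth_init trace above builds the whole test network from scratch: a network namespace for the target, veth pairs for the initiator and the two target interfaces, a bridge joining them, an iptables rule admitting TCP/4420, and ping checks in both directions. A condensed sketch of that bring-up using only commands that appear in the trace (the initial link-deletion cleanup and error handling are omitted):

```bash
# Sketch of nvmf_veth_init as traced above (stale-link cleanup omitted).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                   # initiator -> target
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
```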
01:08:48.450 [2024-07-22 11:05:53.406374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:48.450 [2024-07-22 11:05:53.406392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:48.450 [2024-07-22 11:05:53.406406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:48.450 [2024-07-22 11:05:53.406696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:08:48.450 [2024-07-22 11:05:53.407126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:08:48.450 [2024-07-22 11:05:53.407138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:08:49.015 11:05:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:08:49.015 11:05:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 01:08:49.015 11:05:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:08:49.015 11:05:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 01:08:49.015 11:05:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:08:49.015 11:05:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:49.015 11:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:08:49.276 [2024-07-22 11:05:54.445212] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:49.276 11:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:49.842 11:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:08:49.842 11:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:49.842 11:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:08:49.842 11:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:08:50.099 11:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:08:50.665 11:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=21b829aa-05ca-4f99-a13b-a4253c3d655b 01:08:50.665 11:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 21b829aa-05ca-4f99-a13b-a4253c3d655b lvol 20 01:08:50.665 11:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c0809980-9ad1-438e-bd79-92db2bae8078 01:08:50.665 11:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:08:50.923 11:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c0809980-9ad1-438e-bd79-92db2bae8078 01:08:51.488 11:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:08:51.488 [2024-07-22 11:05:56.644003] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:08:51.488 11:05:56 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:08:52.052 11:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=89355 01:08:52.052 11:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:08:52.052 11:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:08:52.985 11:05:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c0809980-9ad1-438e-bd79-92db2bae8078 MY_SNAPSHOT 01:08:53.242 11:05:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c9fe3559-e820-4edc-915e-d796c8980851 01:08:53.242 11:05:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c0809980-9ad1-438e-bd79-92db2bae8078 30 01:08:53.500 11:05:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone c9fe3559-e820-4edc-915e-d796c8980851 MY_CLONE 01:08:54.064 11:05:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1f9cff8b-230e-4c72-9584-de26111aea6b 01:08:54.064 11:05:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 1f9cff8b-230e-4c72-9584-de26111aea6b 01:08:54.629 11:05:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 89355 01:09:02.751 Initializing NVMe Controllers 01:09:02.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 01:09:02.751 Controller IO queue size 128, less than required. 01:09:02.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:09:02.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:09:02.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:09:02.751 Initialization complete. Launching workers. 
01:09:02.751 ======================================================== 01:09:02.751 Latency(us) 01:09:02.751 Device Information : IOPS MiB/s Average min max 01:09:02.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9677.97 37.80 13237.40 2194.06 198115.82 01:09:02.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9016.07 35.22 14205.46 3370.72 72707.00 01:09:02.751 ======================================================== 01:09:02.751 Total : 18694.04 73.02 13704.29 2194.06 198115.82 01:09:02.751 01:09:02.751 11:06:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:09:02.751 11:06:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c0809980-9ad1-438e-bd79-92db2bae8078 01:09:02.751 11:06:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21b829aa-05ca-4f99-a13b-a4253c3d655b 01:09:03.009 11:06:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 01:09:03.009 11:06:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 01:09:03.009 11:06:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 01:09:03.009 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 01:09:03.009 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:09:03.266 rmmod nvme_tcp 01:09:03.266 rmmod nvme_fabrics 01:09:03.266 rmmod nvme_keyring 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 89213 ']' 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 89213 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 89213 ']' 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 89213 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89213 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89213' 01:09:03.266 killing process with pid 89213 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 89213 01:09:03.266 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 89213 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
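Before the teardown traced above, the lvol test provisions everything through rpc.py: two malloc bdevs striped into raid0, an lvstore and a small lvol on top, a subsystem exposing the lvol over TCP, then a snapshot, a resize, a clone, and an inflate ahead of the perf run. A condensed sketch of that sequence, with the run-time UUIDs captured into shell variables instead of the concrete values that appear in the log:

```bash
# Sketch of the nvmf_lvol provisioning and cleanup steps as traced above.
# $rpc stands for the repo's rpc.py; UUIDs are captured instead of hard-coded.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512                    # -> Malloc0
$rpc bdev_malloc_create 64 512                    # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # size 20, per LVOL_BDEV_INIT_SIZE

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                  # grow to LVOL_BDEV_FINAL_SIZE
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

# Cleanup mirrors the trace: delete the subsystem, the lvol, then the lvstore.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
```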
01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:09:03.523 01:09:03.523 real 0m16.034s 01:09:03.523 user 1m6.946s 01:09:03.523 sys 0m3.855s 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:03.523 ************************************ 01:09:03.523 END TEST nvmf_lvol 01:09:03.523 11:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:09:03.523 ************************************ 01:09:03.780 11:06:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:09:03.780 11:06:08 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:09:03.780 11:06:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:09:03.780 11:06:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:03.780 11:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:03.780 ************************************ 01:09:03.780 START TEST nvmf_lvs_grow 01:09:03.780 ************************************ 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:09:03.780 * Looking for test storage... 
01:09:03.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:09:03.780 Cannot find device "nvmf_tgt_br" 01:09:03.780 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 01:09:03.781 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:09:03.781 Cannot find device "nvmf_tgt_br2" 01:09:03.781 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 01:09:03.781 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:09:03.781 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:09:03.781 Cannot find device "nvmf_tgt_br" 01:09:03.781 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 01:09:03.781 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:09:03.781 Cannot find device "nvmf_tgt_br2" 01:09:03.781 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 01:09:03.781 11:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:04.038 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:04.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:09:04.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:09:04.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 01:09:04.038 01:09:04.038 --- 10.0.0.2 ping statistics --- 01:09:04.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:04.038 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:09:04.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:04.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 01:09:04.038 01:09:04.038 --- 10.0.0.3 ping statistics --- 01:09:04.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:04.038 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:04.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:04.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:09:04.038 01:09:04.038 --- 10.0.0.1 ping statistics --- 01:09:04.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:04.038 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:09:04.038 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=89719 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 89719 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 89719 ']' 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:04.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
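With the namespace networking in place again, nvmfappstart launches the target inside the namespace and waits for its RPC socket before the lvs_grow test continues. A simplified sketch of that startup; the polling loop is only a stand-in for the script's waitforlisten helper, and the transport options match the nvmf_create_transport call traced just below:

```bash
# Sketch: start nvmf_tgt inside the target namespace and create the TCP transport.
# The socket-polling loop is a simplified stand-in for waitforlisten.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
```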
01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:04.296 11:06:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:09:04.296 [2024-07-22 11:06:09.314629] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:09:04.296 [2024-07-22 11:06:09.314711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:04.296 [2024-07-22 11:06:09.458365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:04.554 [2024-07-22 11:06:09.533085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:04.554 [2024-07-22 11:06:09.533135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:04.554 [2024-07-22 11:06:09.533149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:04.554 [2024-07-22 11:06:09.533159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:04.554 [2024-07-22 11:06:09.533168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:04.554 [2024-07-22 11:06:09.533197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:09:05.488 [2024-07-22 11:06:10.654642] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:09:05.488 ************************************ 01:09:05.488 START TEST lvs_grow_clean 01:09:05.488 ************************************ 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:09:05.488 11:06:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:09:05.488 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:09:05.746 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:09:06.004 11:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:09:06.004 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:09:06.262 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:06.262 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:06.262 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:09:06.262 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:09:06.262 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:09:06.262 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u acdff54f-732f-481a-a2f7-52a20bf000fb lvol 150 01:09:06.520 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d31115d8-401e-4dd7-8755-319c64ed71b5 01:09:06.520 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:09:06.520 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:09:06.778 [2024-07-22 11:06:11.897721] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:09:06.778 [2024-07-22 11:06:11.897793] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:09:06.778 true 01:09:06.778 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:09:06.778 11:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:07.036 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:09:07.036 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:09:07.294 11:06:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d31115d8-401e-4dd7-8755-319c64ed71b5 01:09:07.552 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:09:07.810 [2024-07-22 11:06:12.766182] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89881 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89881 /var/tmp/bdevperf.sock 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 89881 ']' 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:09:07.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:07.810 11:06:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:09:08.068 [2024-07-22 11:06:13.032461] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:09:08.068 [2024-07-22 11:06:13.032545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89881 ] 01:09:08.068 [2024-07-22 11:06:13.170607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:08.068 [2024-07-22 11:06:13.269749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:09:08.999 11:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:08.999 11:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 01:09:08.999 11:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:09:09.256 Nvme0n1 01:09:09.256 11:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:09:09.514 [ 01:09:09.514 { 01:09:09.514 "aliases": [ 01:09:09.514 "d31115d8-401e-4dd7-8755-319c64ed71b5" 01:09:09.514 ], 01:09:09.514 "assigned_rate_limits": { 01:09:09.514 "r_mbytes_per_sec": 0, 01:09:09.514 "rw_ios_per_sec": 0, 01:09:09.514 "rw_mbytes_per_sec": 0, 01:09:09.514 "w_mbytes_per_sec": 0 01:09:09.514 }, 01:09:09.514 "block_size": 4096, 01:09:09.514 "claimed": false, 01:09:09.514 "driver_specific": { 01:09:09.514 "mp_policy": "active_passive", 01:09:09.514 "nvme": [ 01:09:09.514 { 01:09:09.514 "ctrlr_data": { 01:09:09.514 "ana_reporting": false, 01:09:09.514 "cntlid": 1, 01:09:09.514 "firmware_revision": "24.09", 01:09:09.514 "model_number": "SPDK bdev Controller", 01:09:09.514 "multi_ctrlr": true, 01:09:09.514 "oacs": { 01:09:09.514 "firmware": 0, 01:09:09.514 "format": 0, 01:09:09.514 "ns_manage": 0, 01:09:09.514 "security": 0 01:09:09.514 }, 01:09:09.514 "serial_number": "SPDK0", 01:09:09.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:09:09.514 "vendor_id": "0x8086" 01:09:09.514 }, 01:09:09.514 "ns_data": { 01:09:09.514 "can_share": true, 01:09:09.514 "id": 1 01:09:09.514 }, 01:09:09.514 "trid": { 01:09:09.514 "adrfam": "IPv4", 01:09:09.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:09:09.514 "traddr": "10.0.0.2", 01:09:09.514 "trsvcid": "4420", 01:09:09.514 "trtype": "TCP" 01:09:09.514 }, 01:09:09.514 "vs": { 01:09:09.514 "nvme_version": "1.3" 01:09:09.514 } 01:09:09.514 } 01:09:09.514 ] 01:09:09.514 }, 01:09:09.514 "memory_domains": [ 01:09:09.514 { 01:09:09.514 "dma_device_id": "system", 01:09:09.514 "dma_device_type": 1 01:09:09.514 } 01:09:09.514 ], 01:09:09.514 "name": "Nvme0n1", 01:09:09.514 "num_blocks": 38912, 01:09:09.514 "product_name": "NVMe disk", 01:09:09.514 "supported_io_types": { 01:09:09.514 "abort": true, 01:09:09.514 "compare": true, 01:09:09.514 "compare_and_write": true, 01:09:09.514 "copy": true, 01:09:09.514 "flush": true, 01:09:09.514 "get_zone_info": false, 01:09:09.514 "nvme_admin": true, 01:09:09.514 "nvme_io": true, 01:09:09.514 "nvme_io_md": false, 01:09:09.514 "nvme_iov_md": false, 01:09:09.515 "read": true, 01:09:09.515 "reset": true, 01:09:09.515 "seek_data": false, 01:09:09.515 "seek_hole": false, 01:09:09.515 "unmap": true, 01:09:09.515 "write": true, 01:09:09.515 "write_zeroes": true, 01:09:09.515 "zcopy": false, 01:09:09.515 
"zone_append": false, 01:09:09.515 "zone_management": false 01:09:09.515 }, 01:09:09.515 "uuid": "d31115d8-401e-4dd7-8755-319c64ed71b5", 01:09:09.515 "zoned": false 01:09:09.515 } 01:09:09.515 ] 01:09:09.515 11:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:09:09.515 11:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89934 01:09:09.515 11:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:09:09.515 Running I/O for 10 seconds... 01:09:10.448 Latency(us) 01:09:10.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:10.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:10.448 Nvme0n1 : 1.00 8354.00 32.63 0.00 0.00 0.00 0.00 0.00 01:09:10.448 =================================================================================================================== 01:09:10.448 Total : 8354.00 32.63 0.00 0.00 0.00 0.00 0.00 01:09:10.448 01:09:11.381 11:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:11.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:11.639 Nvme0n1 : 2.00 7587.00 29.64 0.00 0.00 0.00 0.00 0.00 01:09:11.639 =================================================================================================================== 01:09:11.639 Total : 7587.00 29.64 0.00 0.00 0.00 0.00 0.00 01:09:11.639 01:09:11.898 true 01:09:11.898 11:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:11.898 11:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:09:12.156 11:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:09:12.156 11:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:09:12.156 11:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 89934 01:09:12.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:12.722 Nvme0n1 : 3.00 7400.67 28.91 0.00 0.00 0.00 0.00 0.00 01:09:12.722 =================================================================================================================== 01:09:12.722 Total : 7400.67 28.91 0.00 0.00 0.00 0.00 0.00 01:09:12.722 01:09:13.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:13.655 Nvme0n1 : 4.00 7411.25 28.95 0.00 0.00 0.00 0.00 0.00 01:09:13.655 =================================================================================================================== 01:09:13.655 Total : 7411.25 28.95 0.00 0.00 0.00 0.00 0.00 01:09:13.655 01:09:14.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:14.590 Nvme0n1 : 5.00 7340.00 28.67 0.00 0.00 0.00 0.00 0.00 01:09:14.590 =================================================================================================================== 01:09:14.590 Total : 7340.00 28.67 0.00 0.00 0.00 0.00 0.00 01:09:14.590 01:09:15.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:15.525 
Nvme0n1 : 6.00 7370.50 28.79 0.00 0.00 0.00 0.00 0.00 01:09:15.525 =================================================================================================================== 01:09:15.525 Total : 7370.50 28.79 0.00 0.00 0.00 0.00 0.00 01:09:15.525 01:09:16.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:16.457 Nvme0n1 : 7.00 7371.43 28.79 0.00 0.00 0.00 0.00 0.00 01:09:16.457 =================================================================================================================== 01:09:16.457 Total : 7371.43 28.79 0.00 0.00 0.00 0.00 0.00 01:09:16.457 01:09:17.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:17.830 Nvme0n1 : 8.00 7330.38 28.63 0.00 0.00 0.00 0.00 0.00 01:09:17.830 =================================================================================================================== 01:09:17.830 Total : 7330.38 28.63 0.00 0.00 0.00 0.00 0.00 01:09:17.830 01:09:18.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:18.764 Nvme0n1 : 9.00 7303.11 28.53 0.00 0.00 0.00 0.00 0.00 01:09:18.764 =================================================================================================================== 01:09:18.764 Total : 7303.11 28.53 0.00 0.00 0.00 0.00 0.00 01:09:18.764 01:09:19.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:19.704 Nvme0n1 : 10.00 7298.00 28.51 0.00 0.00 0.00 0.00 0.00 01:09:19.704 =================================================================================================================== 01:09:19.704 Total : 7298.00 28.51 0.00 0.00 0.00 0.00 0.00 01:09:19.704 01:09:19.704 01:09:19.704 Latency(us) 01:09:19.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:19.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:19.704 Nvme0n1 : 10.02 7298.12 28.51 0.00 0.00 17534.17 7268.54 53143.74 01:09:19.704 =================================================================================================================== 01:09:19.704 Total : 7298.12 28.51 0.00 0.00 17534.17 7268.54 53143.74 01:09:19.704 0 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89881 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 89881 ']' 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 89881 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89881 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:09:19.704 killing process with pid 89881 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89881' 01:09:19.704 Received shutdown signal, test time was about 10.000000 seconds 01:09:19.704 01:09:19.704 Latency(us) 01:09:19.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:19.704 
=================================================================================================================== 01:09:19.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 89881 01:09:19.704 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 89881 01:09:19.963 11:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:09:20.235 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:09:20.494 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:20.494 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:09:20.752 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:09:20.752 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:09:20.752 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:09:20.752 [2024-07-22 11:06:25.954596] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:09:21.010 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:21.010 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 01:09:21.010 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:21.010 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:21.010 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:09:21.010 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:21.010 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:09:21.010 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:21.010 11:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:09:21.010 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:21.010 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:09:21.010 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:21.269 2024/07/22 11:06:26 error on 
JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:acdff54f-732f-481a-a2f7-52a20bf000fb], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:09:21.269 request: 01:09:21.269 { 01:09:21.269 "method": "bdev_lvol_get_lvstores", 01:09:21.269 "params": { 01:09:21.269 "uuid": "acdff54f-732f-481a-a2f7-52a20bf000fb" 01:09:21.269 } 01:09:21.269 } 01:09:21.269 Got JSON-RPC error response 01:09:21.269 GoRPCClient: error on JSON-RPC call 01:09:21.269 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 01:09:21.269 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:09:21.269 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:09:21.269 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:09:21.269 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:09:21.527 aio_bdev 01:09:21.528 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d31115d8-401e-4dd7-8755-319c64ed71b5 01:09:21.528 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d31115d8-401e-4dd7-8755-319c64ed71b5 01:09:21.528 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 01:09:21.528 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 01:09:21.528 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 01:09:21.528 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 01:09:21.528 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:09:21.786 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d31115d8-401e-4dd7-8755-319c64ed71b5 -t 2000 01:09:21.786 [ 01:09:21.786 { 01:09:21.786 "aliases": [ 01:09:21.786 "lvs/lvol" 01:09:21.786 ], 01:09:21.786 "assigned_rate_limits": { 01:09:21.786 "r_mbytes_per_sec": 0, 01:09:21.786 "rw_ios_per_sec": 0, 01:09:21.786 "rw_mbytes_per_sec": 0, 01:09:21.786 "w_mbytes_per_sec": 0 01:09:21.786 }, 01:09:21.786 "block_size": 4096, 01:09:21.786 "claimed": false, 01:09:21.786 "driver_specific": { 01:09:21.786 "lvol": { 01:09:21.786 "base_bdev": "aio_bdev", 01:09:21.786 "clone": false, 01:09:21.786 "esnap_clone": false, 01:09:21.786 "lvol_store_uuid": "acdff54f-732f-481a-a2f7-52a20bf000fb", 01:09:21.786 "num_allocated_clusters": 38, 01:09:21.786 "snapshot": false, 01:09:21.786 "thin_provision": false 01:09:21.786 } 01:09:21.786 }, 01:09:21.786 "name": "d31115d8-401e-4dd7-8755-319c64ed71b5", 01:09:21.786 "num_blocks": 38912, 01:09:21.786 "product_name": "Logical Volume", 01:09:21.786 "supported_io_types": { 01:09:21.786 "abort": false, 01:09:21.786 "compare": false, 01:09:21.786 "compare_and_write": false, 01:09:21.786 "copy": false, 01:09:21.786 "flush": false, 01:09:21.786 "get_zone_info": false, 01:09:21.786 "nvme_admin": false, 01:09:21.786 "nvme_io": false, 01:09:21.786 "nvme_io_md": false, 01:09:21.786 "nvme_iov_md": false, 01:09:21.786 "read": true, 
01:09:21.786 "reset": true, 01:09:21.786 "seek_data": true, 01:09:21.786 "seek_hole": true, 01:09:21.786 "unmap": true, 01:09:21.786 "write": true, 01:09:21.786 "write_zeroes": true, 01:09:21.786 "zcopy": false, 01:09:21.786 "zone_append": false, 01:09:21.786 "zone_management": false 01:09:21.786 }, 01:09:21.786 "uuid": "d31115d8-401e-4dd7-8755-319c64ed71b5", 01:09:21.786 "zoned": false 01:09:21.786 } 01:09:21.786 ] 01:09:21.786 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 01:09:21.786 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:21.786 11:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:09:22.352 11:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:09:22.352 11:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:09:22.352 11:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:22.352 11:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:09:22.352 11:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d31115d8-401e-4dd7-8755-319c64ed71b5 01:09:22.611 11:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u acdff54f-732f-481a-a2f7-52a20bf000fb 01:09:22.875 11:06:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:09:23.147 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:09:23.740 ************************************ 01:09:23.740 END TEST lvs_grow_clean 01:09:23.740 ************************************ 01:09:23.740 01:09:23.740 real 0m17.987s 01:09:23.740 user 0m17.170s 01:09:23.740 sys 0m2.274s 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:09:23.740 ************************************ 01:09:23.740 START TEST lvs_grow_dirty 01:09:23.740 ************************************ 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 
-- # local data_clusters free_clusters 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:09:23.740 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:09:23.997 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:09:23.997 11:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:09:24.255 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:24.255 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:24.255 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:09:24.512 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:09:24.512 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:09:24.512 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 lvol 150 01:09:24.769 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9727a2e6-f0bb-42d0-abc7-d45086f70fe8 01:09:24.769 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:09:24.769 11:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:09:25.027 [2024-07-22 11:06:30.104933] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:09:25.027 [2024-07-22 11:06:30.105080] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:09:25.027 true 01:09:25.027 11:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:25.027 11:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:09:25.285 11:06:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:09:25.285 11:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:09:25.543 11:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9727a2e6-f0bb-42d0-abc7-d45086f70fe8 01:09:25.543 11:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:09:25.801 [2024-07-22 11:06:30.897544] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:25.801 11:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=90326 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 90326 /var/tmp/bdevperf.sock 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 90326 ']' 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:09:26.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:26.059 11:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:09:26.059 [2024-07-22 11:06:31.178837] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:09:26.059 [2024-07-22 11:06:31.179019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90326 ] 01:09:26.317 [2024-07-22 11:06:31.323412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:26.317 [2024-07-22 11:06:31.439473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:09:27.253 11:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:27.254 11:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 01:09:27.254 11:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:09:27.254 Nvme0n1 01:09:27.254 11:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:09:27.821 [ 01:09:27.821 { 01:09:27.821 "aliases": [ 01:09:27.821 "9727a2e6-f0bb-42d0-abc7-d45086f70fe8" 01:09:27.821 ], 01:09:27.821 "assigned_rate_limits": { 01:09:27.821 "r_mbytes_per_sec": 0, 01:09:27.821 "rw_ios_per_sec": 0, 01:09:27.821 "rw_mbytes_per_sec": 0, 01:09:27.821 "w_mbytes_per_sec": 0 01:09:27.821 }, 01:09:27.821 "block_size": 4096, 01:09:27.821 "claimed": false, 01:09:27.821 "driver_specific": { 01:09:27.821 "mp_policy": "active_passive", 01:09:27.821 "nvme": [ 01:09:27.821 { 01:09:27.821 "ctrlr_data": { 01:09:27.821 "ana_reporting": false, 01:09:27.821 "cntlid": 1, 01:09:27.821 "firmware_revision": "24.09", 01:09:27.821 "model_number": "SPDK bdev Controller", 01:09:27.821 "multi_ctrlr": true, 01:09:27.821 "oacs": { 01:09:27.821 "firmware": 0, 01:09:27.821 "format": 0, 01:09:27.821 "ns_manage": 0, 01:09:27.821 "security": 0 01:09:27.821 }, 01:09:27.821 "serial_number": "SPDK0", 01:09:27.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:09:27.821 "vendor_id": "0x8086" 01:09:27.821 }, 01:09:27.821 "ns_data": { 01:09:27.821 "can_share": true, 01:09:27.821 "id": 1 01:09:27.821 }, 01:09:27.821 "trid": { 01:09:27.821 "adrfam": "IPv4", 01:09:27.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:09:27.821 "traddr": "10.0.0.2", 01:09:27.821 "trsvcid": "4420", 01:09:27.821 "trtype": "TCP" 01:09:27.821 }, 01:09:27.821 "vs": { 01:09:27.821 "nvme_version": "1.3" 01:09:27.821 } 01:09:27.821 } 01:09:27.821 ] 01:09:27.821 }, 01:09:27.821 "memory_domains": [ 01:09:27.821 { 01:09:27.821 "dma_device_id": "system", 01:09:27.821 "dma_device_type": 1 01:09:27.821 } 01:09:27.821 ], 01:09:27.821 "name": "Nvme0n1", 01:09:27.821 "num_blocks": 38912, 01:09:27.821 "product_name": "NVMe disk", 01:09:27.821 "supported_io_types": { 01:09:27.821 "abort": true, 01:09:27.821 "compare": true, 01:09:27.821 "compare_and_write": true, 01:09:27.821 "copy": true, 01:09:27.821 "flush": true, 01:09:27.821 "get_zone_info": false, 01:09:27.821 "nvme_admin": true, 01:09:27.821 "nvme_io": true, 01:09:27.821 "nvme_io_md": false, 01:09:27.821 "nvme_iov_md": false, 01:09:27.821 "read": true, 01:09:27.821 "reset": true, 01:09:27.821 "seek_data": false, 01:09:27.821 "seek_hole": false, 01:09:27.821 "unmap": true, 01:09:27.821 "write": true, 01:09:27.821 "write_zeroes": true, 01:09:27.821 "zcopy": false, 01:09:27.821 
"zone_append": false, 01:09:27.821 "zone_management": false 01:09:27.821 }, 01:09:27.821 "uuid": "9727a2e6-f0bb-42d0-abc7-d45086f70fe8", 01:09:27.821 "zoned": false 01:09:27.821 } 01:09:27.821 ] 01:09:27.821 11:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90368 01:09:27.821 11:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:09:27.821 11:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:09:27.821 Running I/O for 10 seconds... 01:09:28.755 Latency(us) 01:09:28.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:28.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:28.755 Nvme0n1 : 1.00 8812.00 34.42 0.00 0.00 0.00 0.00 0.00 01:09:28.755 =================================================================================================================== 01:09:28.755 Total : 8812.00 34.42 0.00 0.00 0.00 0.00 0.00 01:09:28.755 01:09:29.706 11:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:29.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:29.706 Nvme0n1 : 2.00 9032.50 35.28 0.00 0.00 0.00 0.00 0.00 01:09:29.706 =================================================================================================================== 01:09:29.706 Total : 9032.50 35.28 0.00 0.00 0.00 0.00 0.00 01:09:29.706 01:09:29.965 true 01:09:29.965 11:06:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:09:29.965 11:06:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:30.224 11:06:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:09:30.224 11:06:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:09:30.224 11:06:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 90368 01:09:30.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:30.790 Nvme0n1 : 3.00 9036.33 35.30 0.00 0.00 0.00 0.00 0.00 01:09:30.790 =================================================================================================================== 01:09:30.790 Total : 9036.33 35.30 0.00 0.00 0.00 0.00 0.00 01:09:30.790 01:09:31.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:31.724 Nvme0n1 : 4.00 9022.75 35.25 0.00 0.00 0.00 0.00 0.00 01:09:31.724 =================================================================================================================== 01:09:31.724 Total : 9022.75 35.25 0.00 0.00 0.00 0.00 0.00 01:09:31.724 01:09:33.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:33.097 Nvme0n1 : 5.00 8979.00 35.07 0.00 0.00 0.00 0.00 0.00 01:09:33.097 =================================================================================================================== 01:09:33.097 Total : 8979.00 35.07 0.00 0.00 0.00 0.00 0.00 01:09:33.097 01:09:33.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:33.663 
Nvme0n1 : 6.00 8972.50 35.05 0.00 0.00 0.00 0.00 0.00 01:09:33.663 =================================================================================================================== 01:09:33.663 Total : 8972.50 35.05 0.00 0.00 0.00 0.00 0.00 01:09:33.663 01:09:35.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:35.038 Nvme0n1 : 7.00 8956.00 34.98 0.00 0.00 0.00 0.00 0.00 01:09:35.038 =================================================================================================================== 01:09:35.038 Total : 8956.00 34.98 0.00 0.00 0.00 0.00 0.00 01:09:35.038 01:09:35.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:35.972 Nvme0n1 : 8.00 8723.50 34.08 0.00 0.00 0.00 0.00 0.00 01:09:35.972 =================================================================================================================== 01:09:35.972 Total : 8723.50 34.08 0.00 0.00 0.00 0.00 0.00 01:09:35.972 01:09:36.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:36.906 Nvme0n1 : 9.00 8719.11 34.06 0.00 0.00 0.00 0.00 0.00 01:09:36.906 =================================================================================================================== 01:09:36.906 Total : 8719.11 34.06 0.00 0.00 0.00 0.00 0.00 01:09:36.906 01:09:37.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:37.861 Nvme0n1 : 10.00 8728.70 34.10 0.00 0.00 0.00 0.00 0.00 01:09:37.861 =================================================================================================================== 01:09:37.861 Total : 8728.70 34.10 0.00 0.00 0.00 0.00 0.00 01:09:37.861 01:09:37.861 01:09:37.861 Latency(us) 01:09:37.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:37.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:09:37.861 Nvme0n1 : 10.01 8728.80 34.10 0.00 0.00 14658.80 6881.28 210668.45 01:09:37.861 =================================================================================================================== 01:09:37.861 Total : 8728.80 34.10 0.00 0.00 14658.80 6881.28 210668.45 01:09:37.861 0 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 90326 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 90326 ']' 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 90326 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90326 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:09:37.861 killing process with pid 90326 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90326' 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 90326 01:09:37.861 Received shutdown signal, test time was about 10.000000 seconds 01:09:37.861 01:09:37.861 Latency(us) 01:09:37.861 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:37.861 =================================================================================================================== 01:09:37.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:09:37.861 11:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 90326 01:09:38.119 11:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:09:38.376 11:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:09:38.634 11:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:38.634 11:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 89719 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 89719 01:09:38.891 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 89719 Killed "${NVMF_APP[@]}" "$@" 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:38.891 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=90537 01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 90537 01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 90537 ']' 01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:39.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:39.149 11:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:09:39.149 [2024-07-22 11:06:44.165473] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:09:39.149 [2024-07-22 11:06:44.165589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:39.149 [2024-07-22 11:06:44.300697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:39.407 [2024-07-22 11:06:44.415474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:39.407 [2024-07-22 11:06:44.415589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:39.407 [2024-07-22 11:06:44.415601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:39.407 [2024-07-22 11:06:44.415610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:39.407 [2024-07-22 11:06:44.415617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:39.407 [2024-07-22 11:06:44.415646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:09:39.973 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:39.973 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 01:09:39.973 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:09:39.973 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:39.973 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:09:39.973 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:39.973 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:09:40.231 [2024-07-22 11:06:45.346478] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 01:09:40.231 [2024-07-22 11:06:45.347227] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:09:40.231 [2024-07-22 11:06:45.347564] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:09:40.231 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:09:40.231 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9727a2e6-f0bb-42d0-abc7-d45086f70fe8 01:09:40.231 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9727a2e6-f0bb-42d0-abc7-d45086f70fe8 01:09:40.231 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 01:09:40.231 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 01:09:40.231 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 01:09:40.231 11:06:45 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 01:09:40.231 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:09:40.488 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9727a2e6-f0bb-42d0-abc7-d45086f70fe8 -t 2000 01:09:40.745 [ 01:09:40.745 { 01:09:40.745 "aliases": [ 01:09:40.745 "lvs/lvol" 01:09:40.745 ], 01:09:40.745 "assigned_rate_limits": { 01:09:40.745 "r_mbytes_per_sec": 0, 01:09:40.745 "rw_ios_per_sec": 0, 01:09:40.745 "rw_mbytes_per_sec": 0, 01:09:40.745 "w_mbytes_per_sec": 0 01:09:40.745 }, 01:09:40.745 "block_size": 4096, 01:09:40.745 "claimed": false, 01:09:40.745 "driver_specific": { 01:09:40.745 "lvol": { 01:09:40.745 "base_bdev": "aio_bdev", 01:09:40.745 "clone": false, 01:09:40.745 "esnap_clone": false, 01:09:40.745 "lvol_store_uuid": "0b8cc9d4-ebf3-484e-949b-6f854d8adfb8", 01:09:40.745 "num_allocated_clusters": 38, 01:09:40.745 "snapshot": false, 01:09:40.745 "thin_provision": false 01:09:40.745 } 01:09:40.745 }, 01:09:40.745 "name": "9727a2e6-f0bb-42d0-abc7-d45086f70fe8", 01:09:40.745 "num_blocks": 38912, 01:09:40.745 "product_name": "Logical Volume", 01:09:40.745 "supported_io_types": { 01:09:40.745 "abort": false, 01:09:40.745 "compare": false, 01:09:40.745 "compare_and_write": false, 01:09:40.745 "copy": false, 01:09:40.745 "flush": false, 01:09:40.745 "get_zone_info": false, 01:09:40.745 "nvme_admin": false, 01:09:40.745 "nvme_io": false, 01:09:40.745 "nvme_io_md": false, 01:09:40.745 "nvme_iov_md": false, 01:09:40.745 "read": true, 01:09:40.745 "reset": true, 01:09:40.745 "seek_data": true, 01:09:40.745 "seek_hole": true, 01:09:40.745 "unmap": true, 01:09:40.746 "write": true, 01:09:40.746 "write_zeroes": true, 01:09:40.746 "zcopy": false, 01:09:40.746 "zone_append": false, 01:09:40.746 "zone_management": false 01:09:40.746 }, 01:09:40.746 "uuid": "9727a2e6-f0bb-42d0-abc7-d45086f70fe8", 01:09:40.746 "zoned": false 01:09:40.746 } 01:09:40.746 ] 01:09:41.004 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 01:09:41.004 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:41.004 11:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:09:41.260 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:09:41.261 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:41.261 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:09:41.517 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:09:41.517 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:09:41.517 [2024-07-22 11:06:46.683489] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:09:41.774 11:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:42.031 2024/07/22 11:06:47 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0b8cc9d4-ebf3-484e-949b-6f854d8adfb8], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:09:42.031 request: 01:09:42.031 { 01:09:42.031 "method": "bdev_lvol_get_lvstores", 01:09:42.031 "params": { 01:09:42.031 "uuid": "0b8cc9d4-ebf3-484e-949b-6f854d8adfb8" 01:09:42.031 } 01:09:42.031 } 01:09:42.031 Got JSON-RPC error response 01:09:42.031 GoRPCClient: error on JSON-RPC call 01:09:42.031 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 01:09:42.031 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:09:42.031 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:09:42.031 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:09:42.031 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:09:42.287 aio_bdev 01:09:42.287 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9727a2e6-f0bb-42d0-abc7-d45086f70fe8 01:09:42.287 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9727a2e6-f0bb-42d0-abc7-d45086f70fe8 01:09:42.287 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 01:09:42.287 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 01:09:42.287 11:06:47 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 01:09:42.287 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 01:09:42.287 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:09:42.550 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9727a2e6-f0bb-42d0-abc7-d45086f70fe8 -t 2000 01:09:42.550 [ 01:09:42.550 { 01:09:42.550 "aliases": [ 01:09:42.550 "lvs/lvol" 01:09:42.550 ], 01:09:42.550 "assigned_rate_limits": { 01:09:42.550 "r_mbytes_per_sec": 0, 01:09:42.550 "rw_ios_per_sec": 0, 01:09:42.550 "rw_mbytes_per_sec": 0, 01:09:42.550 "w_mbytes_per_sec": 0 01:09:42.550 }, 01:09:42.550 "block_size": 4096, 01:09:42.550 "claimed": false, 01:09:42.550 "driver_specific": { 01:09:42.550 "lvol": { 01:09:42.550 "base_bdev": "aio_bdev", 01:09:42.550 "clone": false, 01:09:42.550 "esnap_clone": false, 01:09:42.550 "lvol_store_uuid": "0b8cc9d4-ebf3-484e-949b-6f854d8adfb8", 01:09:42.550 "num_allocated_clusters": 38, 01:09:42.550 "snapshot": false, 01:09:42.550 "thin_provision": false 01:09:42.550 } 01:09:42.550 }, 01:09:42.550 "name": "9727a2e6-f0bb-42d0-abc7-d45086f70fe8", 01:09:42.550 "num_blocks": 38912, 01:09:42.550 "product_name": "Logical Volume", 01:09:42.550 "supported_io_types": { 01:09:42.550 "abort": false, 01:09:42.550 "compare": false, 01:09:42.550 "compare_and_write": false, 01:09:42.550 "copy": false, 01:09:42.550 "flush": false, 01:09:42.550 "get_zone_info": false, 01:09:42.550 "nvme_admin": false, 01:09:42.550 "nvme_io": false, 01:09:42.550 "nvme_io_md": false, 01:09:42.550 "nvme_iov_md": false, 01:09:42.550 "read": true, 01:09:42.550 "reset": true, 01:09:42.550 "seek_data": true, 01:09:42.550 "seek_hole": true, 01:09:42.550 "unmap": true, 01:09:42.550 "write": true, 01:09:42.550 "write_zeroes": true, 01:09:42.550 "zcopy": false, 01:09:42.550 "zone_append": false, 01:09:42.550 "zone_management": false 01:09:42.550 }, 01:09:42.550 "uuid": "9727a2e6-f0bb-42d0-abc7-d45086f70fe8", 01:09:42.550 "zoned": false 01:09:42.550 } 01:09:42.551 ] 01:09:42.551 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 01:09:42.551 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:42.551 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:09:42.807 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:09:42.807 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:42.807 11:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:09:43.064 11:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:09:43.064 11:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9727a2e6-f0bb-42d0-abc7-d45086f70fe8 01:09:43.321 11:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8 01:09:43.577 11:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:09:43.834 11:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:09:44.092 01:09:44.092 real 0m20.561s 01:09:44.092 user 0m41.433s 01:09:44.092 sys 0m9.230s 01:09:44.092 11:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:44.092 11:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:09:44.092 ************************************ 01:09:44.092 END TEST lvs_grow_dirty 01:09:44.092 ************************************ 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:09:44.350 nvmf_trace.0 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 01:09:44.350 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:09:44.609 rmmod nvme_tcp 01:09:44.609 rmmod nvme_fabrics 01:09:44.609 rmmod nvme_keyring 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 90537 ']' 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 90537 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 90537 ']' 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 90537 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 01:09:44.609 11:06:49 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90537 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:09:44.609 killing process with pid 90537 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90537' 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 90537 01:09:44.609 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 90537 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:09:44.868 ************************************ 01:09:44.868 END TEST nvmf_lvs_grow 01:09:44.868 ************************************ 01:09:44.868 01:09:44.868 real 0m41.139s 01:09:44.868 user 1m4.946s 01:09:44.868 sys 0m12.344s 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:44.868 11:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:09:44.868 11:06:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:09:44.868 11:06:49 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:09:44.868 11:06:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:09:44.868 11:06:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:44.868 11:06:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:44.868 ************************************ 01:09:44.868 START TEST nvmf_bdev_io_wait 01:09:44.868 ************************************ 01:09:44.868 11:06:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:09:44.868 * Looking for test storage... 
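Note on the lvs_grow_dirty teardown that finishes above: the cleanup runs in a fixed order -- drop the logical volume, drop its lvstore, delete the backing AIO bdev, then remove the AIO file. A minimal sketch of that sequence, run from the SPDK repo root with the names and UUIDs taken from this particular run (they would differ on another run):

  ./scripts/rpc.py bdev_lvol_delete 9727a2e6-f0bb-42d0-abc7-d45086f70fe8             # remove the lvol first
  ./scripts/rpc.py bdev_lvol_delete_lvstore -u 0b8cc9d4-ebf3-484e-949b-6f854d8adfb8  # then the lvstore it lived in
  ./scripts/rpc.py bdev_aio_delete aio_bdev                                          # then the AIO bdev backing the lvstore
  rm -f test/nvmf/target/aio_bdev                                                    # finally the file behind the AIO bdev

The earlier Code=-19 (No such device) reply to bdev_lvol_get_lvstores is the expected negative check (it runs under the NOT helper); the store only becomes discoverable again once bdev_aio_create re-attaches the backing file, after which the free_clusters/total_data_clusters checks pass.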
01:09:44.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:44.868 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:44.869 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:09:45.126 Cannot find device "nvmf_tgt_br" 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:09:45.126 Cannot find device "nvmf_tgt_br2" 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:09:45.126 Cannot find device "nvmf_tgt_br" 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 01:09:45.126 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:09:45.126 Cannot find device "nvmf_tgt_br2" 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
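The burst of "Cannot find device" and "Cannot open network namespace" messages here is nvmf_veth_init tearing down whatever topology a previous test left behind before rebuilding it; each failing cleanup command is tolerated (followed by "# true"). The rebuild that follows on the next lines amounts to roughly the sketch below, using the interface and namespace names seen throughout this log (the "link set ... up" steps are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address on the host
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                 # bridge ties the host-side veth ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port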
01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:45.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:45.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:45.127 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:09:45.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 01:09:45.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 01:09:45.385 01:09:45.385 --- 10.0.0.2 ping statistics --- 01:09:45.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:45.385 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:09:45.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:45.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:09:45.385 01:09:45.385 --- 10.0.0.3 ping statistics --- 01:09:45.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:45.385 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:45.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:45.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 01:09:45.385 01:09:45.385 --- 10.0.0.1 ping statistics --- 01:09:45.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:45.385 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=90951 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 90951 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 90951 ']' 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:45.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
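Once the three pings succeed (host to 10.0.0.2 and 10.0.0.3, and the namespace back to 10.0.0.1), nvmfappstart launches the target inside the namespace and waits for its RPC socket. The exact command is visible above; as a standalone sketch, with a rough stand-in for the waitforlisten helper (the polling loop below is an assumption for illustration, not the helper's actual implementation):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # wait until /var/tmp/spdk.sock answers RPCs before configuring the target
  while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is that wait in progress; the resulting nvmfpid (90951 in this run) is what gets killed at test exit.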
01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:45.385 11:06:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:45.385 [2024-07-22 11:06:50.501540] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:09:45.385 [2024-07-22 11:06:50.501642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:45.644 [2024-07-22 11:06:50.647545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:09:45.644 [2024-07-22 11:06:50.752501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:45.644 [2024-07-22 11:06:50.752773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:45.644 [2024-07-22 11:06:50.752973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:45.644 [2024-07-22 11:06:50.753138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:45.644 [2024-07-22 11:06:50.753184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:45.644 [2024-07-22 11:06:50.753450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:09:45.644 [2024-07-22 11:06:50.754100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:09:45.644 [2024-07-22 11:06:50.754254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:09:45.644 [2024-07-22 11:06:50.754354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:46.579 11:06:51 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:46.579 [2024-07-22 11:06:51.648400] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:46.579 Malloc0 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:46.579 [2024-07-22 11:06:51.721164] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=91004 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=91006 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=91008 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:09:46.579 { 01:09:46.579 "params": { 01:09:46.579 "name": "Nvme$subsystem", 01:09:46.579 "trtype": "$TEST_TRANSPORT", 01:09:46.579 
"traddr": "$NVMF_FIRST_TARGET_IP", 01:09:46.579 "adrfam": "ipv4", 01:09:46.579 "trsvcid": "$NVMF_PORT", 01:09:46.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:09:46.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:09:46.579 "hdgst": ${hdgst:-false}, 01:09:46.579 "ddgst": ${ddgst:-false} 01:09:46.579 }, 01:09:46.579 "method": "bdev_nvme_attach_controller" 01:09:46.579 } 01:09:46.579 EOF 01:09:46.579 )") 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=91010 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:09:46.579 { 01:09:46.579 "params": { 01:09:46.579 "name": "Nvme$subsystem", 01:09:46.579 "trtype": "$TEST_TRANSPORT", 01:09:46.579 "traddr": "$NVMF_FIRST_TARGET_IP", 01:09:46.579 "adrfam": "ipv4", 01:09:46.579 "trsvcid": "$NVMF_PORT", 01:09:46.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:09:46.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:09:46.579 "hdgst": ${hdgst:-false}, 01:09:46.579 "ddgst": ${ddgst:-false} 01:09:46.579 }, 01:09:46.579 "method": "bdev_nvme_attach_controller" 01:09:46.579 } 01:09:46.579 EOF 01:09:46.579 )") 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:09:46.579 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:09:46.579 { 01:09:46.579 "params": { 01:09:46.579 "name": "Nvme$subsystem", 01:09:46.579 "trtype": "$TEST_TRANSPORT", 01:09:46.579 "traddr": "$NVMF_FIRST_TARGET_IP", 01:09:46.579 "adrfam": "ipv4", 01:09:46.580 "trsvcid": "$NVMF_PORT", 01:09:46.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:09:46.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:09:46.580 "hdgst": ${hdgst:-false}, 01:09:46.580 "ddgst": ${ddgst:-false} 01:09:46.580 }, 01:09:46.580 "method": "bdev_nvme_attach_controller" 01:09:46.580 } 01:09:46.580 EOF 01:09:46.580 )") 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 01:09:46.580 
11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:09:46.580 { 01:09:46.580 "params": { 01:09:46.580 "name": "Nvme$subsystem", 01:09:46.580 "trtype": "$TEST_TRANSPORT", 01:09:46.580 "traddr": "$NVMF_FIRST_TARGET_IP", 01:09:46.580 "adrfam": "ipv4", 01:09:46.580 "trsvcid": "$NVMF_PORT", 01:09:46.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:09:46.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:09:46.580 "hdgst": ${hdgst:-false}, 01:09:46.580 "ddgst": ${ddgst:-false} 01:09:46.580 }, 01:09:46.580 "method": "bdev_nvme_attach_controller" 01:09:46.580 } 01:09:46.580 EOF 01:09:46.580 )") 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:09:46.580 "params": { 01:09:46.580 "name": "Nvme1", 01:09:46.580 "trtype": "tcp", 01:09:46.580 "traddr": "10.0.0.2", 01:09:46.580 "adrfam": "ipv4", 01:09:46.580 "trsvcid": "4420", 01:09:46.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:09:46.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:09:46.580 "hdgst": false, 01:09:46.580 "ddgst": false 01:09:46.580 }, 01:09:46.580 "method": "bdev_nvme_attach_controller" 01:09:46.580 }' 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:09:46.580 "params": { 01:09:46.580 "name": "Nvme1", 01:09:46.580 "trtype": "tcp", 01:09:46.580 "traddr": "10.0.0.2", 01:09:46.580 "adrfam": "ipv4", 01:09:46.580 "trsvcid": "4420", 01:09:46.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:09:46.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:09:46.580 "hdgst": false, 01:09:46.580 "ddgst": false 01:09:46.580 }, 01:09:46.580 "method": "bdev_nvme_attach_controller" 01:09:46.580 }' 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:09:46.580 "params": { 01:09:46.580 "name": "Nvme1", 01:09:46.580 "trtype": "tcp", 01:09:46.580 "traddr": "10.0.0.2", 01:09:46.580 "adrfam": "ipv4", 01:09:46.580 "trsvcid": "4420", 01:09:46.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:09:46.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:09:46.580 "hdgst": false, 01:09:46.580 "ddgst": false 01:09:46.580 }, 01:09:46.580 "method": "bdev_nvme_attach_controller" 01:09:46.580 }' 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
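Before the bdevperf jobs launch, the target is configured over RPC with a short fixed sequence: set bdev options, finish framework init (the app was started with --wait-for-rpc), create the TCP transport, create a 64 MiB / 512 B-block malloc bdev, expose it through a subsystem, and add a TCP listener. Collected from the rpc_cmd calls above into one sketch (the rpc() wrapper is just shorthand added here; the tiny -p 5 -c 1 bdev_io pool/cache is presumably what forces the io-wait path this test is named after):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
  rpc bdev_set_options -p 5 -c 1                                            # very small bdev_io pool/cache
  rpc framework_start_init                                                  # leave the --wait-for-rpc state
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420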
01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 01:09:46.580 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:09:46.580 "params": { 01:09:46.580 "name": "Nvme1", 01:09:46.580 "trtype": "tcp", 01:09:46.580 "traddr": "10.0.0.2", 01:09:46.580 "adrfam": "ipv4", 01:09:46.580 "trsvcid": "4420", 01:09:46.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:09:46.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:09:46.580 "hdgst": false, 01:09:46.580 "ddgst": false 01:09:46.580 }, 01:09:46.580 "method": "bdev_nvme_attach_controller" 01:09:46.580 }' 01:09:46.837 [2024-07-22 11:06:51.786842] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:09:46.837 [2024-07-22 11:06:51.787180] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:09:46.837 [2024-07-22 11:06:51.799156] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:09:46.837 [2024-07-22 11:06:51.799238] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:09:46.837 [2024-07-22 11:06:51.803342] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:09:46.837 [2024-07-22 11:06:51.803416] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:09:46.837 11:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 91004 01:09:46.837 [2024-07-22 11:06:51.815222] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:09:46.837 [2024-07-22 11:06:51.815300] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:09:46.837 [2024-07-22 11:06:52.000193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:47.096 [2024-07-22 11:06:52.072900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 01:09:47.096 [2024-07-22 11:06:52.073564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:47.096 [2024-07-22 11:06:52.148540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:47.096 [2024-07-22 11:06:52.152899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:09:47.096 [2024-07-22 11:06:52.229127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:47.096 [2024-07-22 11:06:52.230003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 01:09:47.096 Running I/O for 1 seconds... 01:09:47.096 Running I/O for 1 seconds... 01:09:47.353 [2024-07-22 11:06:52.310042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 01:09:47.353 Running I/O for 1 seconds... 01:09:47.353 Running I/O for 1 seconds... 
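The heredoc fragments printed above are what gen_nvmf_target_json assembles for each bdevperf instance: a single bdev_nvme_attach_controller entry pointing Nvme1 at the subsystem just created (tcp, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1). Four bdevperf processes then run in parallel against that bdev, one workload each on its own core mask; the write job shown above is, roughly:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 \
      -q 128 -o 4096 -w write -t 1 -s 256    # QD 128, 4 KiB I/O, 1 s run, 256 MB of memory (interpretation of -s)

with -w read (mask 0x20), -w flush (0x40) and -w unmap (0x80) launched the same way, which is why four EAL initializations appear above and four separate result tables follow.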
01:09:48.298 01:09:48.298 Latency(us) 01:09:48.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:48.298 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 01:09:48.298 Nvme1n1 : 1.00 192782.84 753.06 0.00 0.00 661.24 288.58 1124.54 01:09:48.298 =================================================================================================================== 01:09:48.298 Total : 192782.84 753.06 0.00 0.00 661.24 288.58 1124.54 01:09:48.298 01:09:48.298 Latency(us) 01:09:48.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:48.298 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 01:09:48.298 Nvme1n1 : 1.01 7605.85 29.71 0.00 0.00 16742.17 7089.80 19541.64 01:09:48.298 =================================================================================================================== 01:09:48.299 Total : 7605.85 29.71 0.00 0.00 16742.17 7089.80 19541.64 01:09:48.299 01:09:48.299 Latency(us) 01:09:48.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:48.299 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 01:09:48.299 Nvme1n1 : 1.01 5980.15 23.36 0.00 0.00 21269.09 11736.90 34793.66 01:09:48.299 =================================================================================================================== 01:09:48.299 Total : 5980.15 23.36 0.00 0.00 21269.09 11736.90 34793.66 01:09:48.299 01:09:48.299 Latency(us) 01:09:48.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:48.299 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 01:09:48.299 Nvme1n1 : 1.01 7037.99 27.49 0.00 0.00 18108.66 4289.63 27525.12 01:09:48.299 =================================================================================================================== 01:09:48.299 Total : 7037.99 27.49 0.00 0.00 18108.66 4289.63 27525.12 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 91006 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 91008 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 91010 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 01:09:48.556 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:09:48.813 rmmod nvme_tcp 01:09:48.813 rmmod nvme_fabrics 01:09:48.813 rmmod nvme_keyring 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 90951 ']' 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 90951 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 90951 ']' 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 90951 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90951 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:09:48.813 killing process with pid 90951 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90951' 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 90951 01:09:48.813 11:06:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 90951 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:09:49.071 01:09:49.071 real 0m4.101s 01:09:49.071 user 0m18.013s 01:09:49.071 sys 0m2.072s 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 01:09:49.071 ************************************ 01:09:49.071 END TEST nvmf_bdev_io_wait 01:09:49.071 11:06:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:09:49.071 ************************************ 01:09:49.071 11:06:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:09:49.071 11:06:54 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:09:49.071 11:06:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:09:49.071 11:06:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:09:49.071 11:06:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:49.071 ************************************ 01:09:49.071 START TEST nvmf_queue_depth 01:09:49.071 ************************************ 01:09:49.071 11:06:54 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:09:49.071 * Looking for test storage... 01:09:49.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:09:49.071 Cannot find device "nvmf_tgt_br" 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:09:49.071 Cannot find device "nvmf_tgt_br2" 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:09:49.071 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:09:49.071 Cannot find device "nvmf_tgt_br" 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:09:49.329 Cannot find device "nvmf_tgt_br2" 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:09:49.329 11:06:54 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:49.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:49.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
01:09:49.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:49.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 01:09:49.329 01:09:49.329 --- 10.0.0.2 ping statistics --- 01:09:49.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:49.329 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:09:49.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:49.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 01:09:49.329 01:09:49.329 --- 10.0.0.3 ping statistics --- 01:09:49.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:49.329 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:49.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:49.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:09:49.329 01:09:49.329 --- 10.0.0.1 ping statistics --- 01:09:49.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:49.329 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:09:49.329 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=91242 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 91242 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 91242 ']' 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:49.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
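The nvmf_veth_init sequence traced above reduces to a small three-leg topology: one veth pair for the initiator, two for the target, all tied together by a bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch of those steps, using the same names and addresses as the trace (run as root; an illustration, not the harness's literal helper):

    #!/usr/bin/env bash
    set -e
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator leg
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target leg 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target leg 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # host reaches both target paths
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target reaches the host

The bridge only joins the three host-side veth peers into one L2 segment, so 10.0.0.1, 10.0.0.2 and 10.0.0.3 sit on a single flat subnet; the "Cannot find device" messages earlier in the trace are the previous run's interfaces being torn down before this setup.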
01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:49.599 11:06:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:49.599 [2024-07-22 11:06:54.612075] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:09:49.599 [2024-07-22 11:06:54.612164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:49.599 [2024-07-22 11:06:54.755014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:49.858 [2024-07-22 11:06:54.856859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:49.858 [2024-07-22 11:06:54.856917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:49.858 [2024-07-22 11:06:54.856927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:49.858 [2024-07-22 11:06:54.856935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:49.858 [2024-07-22 11:06:54.856941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:49.858 [2024-07-22 11:06:54.856978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:50.791 [2024-07-22 11:06:55.698627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:50.791 Malloc0 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
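nvmfappstart, traced above, starts the SPDK target inside that namespace and then blocks in waitforlisten until the application's RPC socket answers. The helper's internals are not shown in the log, so the wait loop below is only an approximation of that start-and-poll pattern (binary, arguments and socket path copied from the trace):

    #!/usr/bin/env bash
    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock

    # Core mask 0x2 pins the reactor to core 1; -e 0xFFFF enables every tracepoint group.
    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Poll until the UNIX-domain RPC socket accepts a request (roughly what waitforlisten
    # does; the real helper also keeps checking that the target pid is still alive).
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done

The "Reactor started on core 1" notice in the trace marks the point at which that poll starts succeeding.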
01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:50.791 [2024-07-22 11:06:55.760175] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=91292 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 91292 /var/tmp/bdevperf.sock 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 91292 ']' 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 01:09:50.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 01:09:50.791 11:06:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:50.791 [2024-07-22 11:06:55.821111] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
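With the target up, the queue_depth test provisions everything over JSON-RPC through its rpc_cmd wrapper: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem, a namespace and one listener. Issued directly with rpc.py rather than the wrapper, the same sequence looks roughly like this (arguments taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport with the harness's option set
    $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001    # -a: allow any host
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

Only 10.0.0.2 is exposed here; the second address only comes into play later, in the multipath test.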
01:09:50.791 [2024-07-22 11:06:55.821210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91292 ] 01:09:50.791 [2024-07-22 11:06:55.963315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:51.049 [2024-07-22 11:06:56.054549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:09:51.982 11:06:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:09:51.982 11:06:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 01:09:51.982 11:06:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:09:51.982 11:06:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 01:09:51.982 11:06:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:09:51.982 NVMe0n1 01:09:51.982 11:06:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:09:51.982 11:06:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:09:51.982 Running I/O for 10 seconds... 01:10:01.959 01:10:01.959 Latency(us) 01:10:01.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:01.959 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 01:10:01.959 Verification LBA range: start 0x0 length 0x4000 01:10:01.959 NVMe0n1 : 10.09 9330.17 36.45 0.00 0.00 109325.71 22639.71 86745.83 01:10:01.959 =================================================================================================================== 01:10:01.959 Total : 9330.17 36.45 0.00 0.00 109325.71 22639.71 86745.83 01:10:01.959 0 01:10:01.959 11:07:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 91292 01:10:01.959 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 91292 ']' 01:10:01.959 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 91292 01:10:01.959 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 01:10:01.959 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:01.960 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91292 01:10:01.960 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:10:01.960 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:10:01.960 killing process with pid 91292 01:10:01.960 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91292' 01:10:01.960 Received shutdown signal, test time was about 10.000000 seconds 01:10:01.960 01:10:01.960 Latency(us) 01:10:01.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:01.960 =================================================================================================================== 01:10:01.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:10:01.960 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 91292 01:10:01.960 11:07:07 
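The I/O half of the test, traced above, is bdevperf started in wait-for-RPC mode (-z) with a queue depth of 1024 and 4 KiB verify I/O; the NVMe-oF controller is attached through bdevperf's own RPC socket and the run is then kicked off with perform_tests. A condensed sketch of that sequence, paths and arguments taken from the trace:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # 1024 outstanding 4 KiB I/Os, verify workload, 10 s run; -z waits for RPC before starting.
    "$spdk/build/examples/bdevperf" -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # (the harness waits for $sock to come up before issuing RPCs; that wait is omitted here)

    # Attach an NVMe/TCP controller to the subsystem exported at 10.0.0.2:4420.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Run the configured job; this produces the IOPS/latency table seen above.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

    kill "$bdevperf_pid" && wait "$bdevperf_pid"

In the run above this sustained roughly 9.3k IOPS against the Malloc-backed namespace; the all-zero latency table after the shutdown notice appears to be part of bdevperf's teardown output rather than a second measurement.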
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 91292 01:10:02.218 11:07:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:10:02.218 11:07:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 01:10:02.218 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 01:10:02.218 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:10:02.475 rmmod nvme_tcp 01:10:02.475 rmmod nvme_fabrics 01:10:02.475 rmmod nvme_keyring 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 91242 ']' 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 91242 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 91242 ']' 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 91242 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91242 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:10:02.475 killing process with pid 91242 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91242' 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 91242 01:10:02.475 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 91242 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:10:02.733 01:10:02.733 real 0m13.695s 01:10:02.733 user 0m23.307s 01:10:02.733 sys 0m2.363s 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 01:10:02.733 11:07:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:10:02.733 ************************************ 01:10:02.733 END TEST nvmf_queue_depth 01:10:02.733 ************************************ 01:10:02.733 11:07:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:10:02.733 11:07:07 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:10:02.733 11:07:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:10:02.733 11:07:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:02.733 11:07:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:10:02.733 ************************************ 01:10:02.733 START TEST nvmf_target_multipath 01:10:02.733 ************************************ 01:10:02.733 11:07:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:10:02.733 * Looking for test storage... 01:10:02.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:02.733 11:07:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:02.733 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:02.991 11:07:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:10:02.991 11:07:07 
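Between the queue_depth test and this nvmf_target_multipath run, the previous test's trap executed nvmftestfini, which stops the target, unloads the NVMe initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring above) and drops the namespace and addresses; the new test then rebuilds the same fixture from scratch. Condensed, and with the helpers' internals hedged, that cleanup amounts to:

    kill "$nvmfpid" && wait "$nvmfpid"             # killprocess: stop the nvmf_tgt started earlier
    modprobe -v -r nvme-tcp                        # pulls out nvme_tcp and its dependents, per the rmmod lines
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk 2> /dev/null  # approximately what _remove_spdk_ns does
    ip -4 addr flush nvmf_init_if

The multipath variant then re-runs nvmftestinit and rebuilds the identical veth topology, as the trace around this point shows.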
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:10:02.991 Cannot find device "nvmf_tgt_br" 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:10:02.991 Cannot find device "nvmf_tgt_br2" 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:10:02.991 Cannot find device "nvmf_tgt_br" 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:10:02.991 Cannot find device "nvmf_tgt_br2" 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:02.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:02.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:10:02.991 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:10:03.249 
11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:10:03.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:03.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 01:10:03.249 01:10:03.249 --- 10.0.0.2 ping statistics --- 01:10:03.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:03.249 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:10:03.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:03.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 01:10:03.249 01:10:03.249 --- 10.0.0.3 ping statistics --- 01:10:03.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:03.249 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:03.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:10:03.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:10:03.249 01:10:03.249 --- 10.0.0.1 ping statistics --- 01:10:03.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:03.249 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=91618 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 91618 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 91618 ']' 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:03.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:03.249 11:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:10:03.249 [2024-07-22 11:07:08.379801] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
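The multipath target being brought up here is provisioned much like the queue-depth one, with two differences visible in the trace that follows: the subsystem is created with -r so the target reports per-listener ANA (asymmetric namespace access) state, and the same subsystem gets listeners on both 10.0.0.2 and 10.0.0.3, to which the initiator connects once per path. A condensed sketch of those steps (transport and Malloc0 creation as before; hostnqn/hostid are the values produced by nvme gen-hostnqn earlier in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -r      # -r: report ANA state
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # path 0
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420   # path 1

    # One connect per path; -g/-G are the TCP header/data digest flags used in the trace.
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 -g -G

With native NVMe multipath in the kernel, the two connects collapse into a single /dev/nvme0n1 namespace backed by two controller paths, which show up below as nvme0c0n1 and nvme0c1n1.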
01:10:03.249 [2024-07-22 11:07:08.379892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:03.507 [2024-07-22 11:07:08.514279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:10:03.507 [2024-07-22 11:07:08.595326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:03.507 [2024-07-22 11:07:08.595716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:03.507 [2024-07-22 11:07:08.595872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:03.507 [2024-07-22 11:07:08.596179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:03.507 [2024-07-22 11:07:08.596193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:03.507 [2024-07-22 11:07:08.596311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:10:03.507 [2024-07-22 11:07:08.596867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:10:03.507 [2024-07-22 11:07:08.596953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:10:03.507 [2024-07-22 11:07:08.596977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:10:04.441 11:07:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:04.441 11:07:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 01:10:04.441 11:07:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:10:04.441 11:07:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 01:10:04.441 11:07:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:10:04.441 11:07:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:04.441 11:07:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:10:04.441 [2024-07-22 11:07:09.616775] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:04.699 11:07:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:10:04.699 Malloc0 01:10:04.956 11:07:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 01:10:04.956 11:07:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:10:05.213 11:07:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:10:05.778 [2024-07-22 11:07:10.679213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:05.778 11:07:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 01:10:05.778 [2024-07-22 11:07:10.939488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:10:05.778 11:07:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 01:10:06.036 11:07:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 01:10:06.294 11:07:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 01:10:06.294 11:07:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 01:10:06.294 11:07:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:10:06.294 11:07:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:10:06.294 11:07:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 01:10:08.192 11:07:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:10:08.192 11:07:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:10:08.192 11:07:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 01:10:08.449 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=91761 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 01:10:08.450 11:07:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:10:08.450 [global] 01:10:08.450 thread=1 01:10:08.450 invalidate=1 01:10:08.450 rw=randrw 01:10:08.450 time_based=1 01:10:08.450 runtime=6 01:10:08.450 ioengine=libaio 01:10:08.450 direct=1 01:10:08.450 bs=4096 01:10:08.450 iodepth=128 01:10:08.450 norandommap=0 01:10:08.450 numjobs=1 01:10:08.450 01:10:08.450 verify_dump=1 01:10:08.450 verify_backlog=512 01:10:08.450 verify_state_save=0 01:10:08.450 do_verify=1 01:10:08.450 verify=crc32c-intel 01:10:08.450 [job0] 01:10:08.450 filename=/dev/nvme0n1 01:10:08.450 Could not set queue depth (nvme0n1) 01:10:08.450 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:10:08.450 fio-3.35 01:10:08.450 Starting 1 thread 01:10:09.384 11:07:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:10:09.642 11:07:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:10:09.901 11:07:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:10:10.858 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:10:10.858 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:10:10.858 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:10:10.858 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
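check_ana_state, whose expansion is being traced here, is how the test observes failover from the host side: each controller path exposes its current ANA state at /sys/block/<path>/ana_state, and the helper re-reads that file, for up to 20 seconds in one-second steps, until it shows the expected value. Re-stated as a standalone function under those assumptions:

    check_ana_state() {
        local path=$1 expected=$2 timeout=20
        local f=/sys/block/$path/ana_state          # e.g. /sys/block/nvme0c0n1/ana_state

        while [[ ! -e $f || $(<"$f") != "$expected" ]]; do
            if (( timeout-- == 0 )); then
                echo "$path did not reach '$expected' in time" >&2
                return 1
            fi
            sleep 1
        done
    }

    check_ana_state nvme0c0n1 optimized
    check_ana_state nvme0c1n1 optimized

Note the spelling difference visible in the trace: the RPC side uses non_optimized while the sysfs file reports non-optimized, so the helper compares against the hyphenated form.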
-e /sys/block/nvme0c1n1/ana_state ]] 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:10:11.422 11:07:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:10:12.795 11:07:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:10:12.795 11:07:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:10:12.795 11:07:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:10:12.795 11:07:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 91761 01:10:14.694 01:10:14.694 job0: (groupid=0, jobs=1): err= 0: pid=91782: Mon Jul 22 11:07:19 2024 01:10:14.694 read: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(251MiB/6006msec) 01:10:14.694 slat (usec): min=3, max=5160, avg=52.87, stdev=239.86 01:10:14.694 clat (usec): min=774, max=15633, avg=8120.27, stdev=1438.47 01:10:14.694 lat (usec): min=831, max=15643, avg=8173.14, stdev=1449.61 01:10:14.694 clat percentiles (usec): 01:10:14.694 | 1.00th=[ 4883], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7046], 01:10:14.694 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8356], 01:10:14.694 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[10683], 01:10:14.694 | 99.00th=[12518], 99.50th=[13042], 99.90th=[14222], 99.95th=[14746], 01:10:14.694 | 99.99th=[15533] 01:10:14.694 bw ( KiB/s): min=11704, max=32320, per=53.32%, avg=22855.27, stdev=6112.29, samples=11 01:10:14.694 iops : min= 2926, max= 8080, avg=5713.82, stdev=1528.07, samples=11 01:10:14.694 write: IOPS=6441, BW=25.2MiB/s (26.4MB/s)(135MiB/5371msec); 0 zone resets 01:10:14.694 slat (usec): min=4, max=3264, avg=64.22, stdev=159.85 01:10:14.694 clat (usec): min=606, max=15105, avg=6908.97, stdev=1183.41 01:10:14.694 lat (usec): min=666, max=15132, avg=6973.19, stdev=1189.39 01:10:14.694 clat percentiles (usec): 01:10:14.694 | 1.00th=[ 3916], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6063], 01:10:14.694 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7111], 01:10:14.694 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8291], 95.00th=[ 8848], 01:10:14.694 | 99.00th=[10290], 99.50th=[11338], 99.90th=[12780], 99.95th=[13173], 01:10:14.694 | 99.99th=[14091] 01:10:14.694 bw ( KiB/s): min=12288, max=31928, per=88.79%, avg=22880.00, stdev=5900.11, samples=11 01:10:14.694 iops : min= 3072, max= 7982, avg=5720.00, stdev=1475.03, samples=11 01:10:14.694 lat (usec) : 750=0.01%, 1000=0.01% 01:10:14.694 lat (msec) : 2=0.01%, 4=0.54%, 10=93.08%, 20=6.37% 01:10:14.694 cpu : usr=6.09%, sys=23.68%, ctx=6370, majf=0, minf=96 01:10:14.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:10:14.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:14.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:10:14.694 issued rwts: total=64361,34599,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:14.694 latency : target=0, window=0, percentile=100.00%, depth=128 01:10:14.694 01:10:14.694 Run status group 0 (all jobs): 01:10:14.694 READ: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=251MiB (264MB), run=6006-6006msec 01:10:14.694 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=135MiB (142MB), run=5371-5371msec 01:10:14.694 01:10:14.694 Disk stats (read/write): 01:10:14.694 nvme0n1: ios=63401/33823, merge=0/0, 
ticks=482449/217888, in_queue=700337, util=98.70% 01:10:14.694 11:07:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 01:10:14.952 11:07:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 01:10:15.211 11:07:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:10:16.147 11:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:10:16.147 11:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:10:16.147 11:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:10:16.148 11:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 01:10:16.148 11:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=91912 01:10:16.148 11:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:10:16.148 11:07:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 01:10:16.148 [global] 01:10:16.148 thread=1 01:10:16.148 invalidate=1 01:10:16.148 rw=randrw 01:10:16.148 time_based=1 01:10:16.148 runtime=6 01:10:16.148 ioengine=libaio 01:10:16.148 direct=1 01:10:16.148 bs=4096 01:10:16.148 iodepth=128 01:10:16.148 norandommap=0 01:10:16.148 numjobs=1 01:10:16.148 01:10:16.148 verify_dump=1 01:10:16.148 verify_backlog=512 01:10:16.148 verify_state_save=0 01:10:16.148 do_verify=1 01:10:16.148 verify=crc32c-intel 01:10:16.148 [job0] 01:10:16.148 filename=/dev/nvme0n1 01:10:16.148 Could not set queue depth (nvme0n1) 01:10:16.423 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:10:16.423 fio-3.35 01:10:16.423 Starting 1 thread 01:10:17.402 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:10:17.402 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:10:17.661 11:07:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:10:19.031 11:07:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:10:19.032 11:07:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:10:19.032 11:07:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:10:19.032 11:07:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:10:19.032 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:10:19.290 11:07:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:10:20.223 11:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:10:20.223 11:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:10:20.223 11:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:10:20.223 11:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 91912 01:10:22.752 01:10:22.752 job0: (groupid=0, jobs=1): err= 0: pid=91937: Mon Jul 22 11:07:27 2024 01:10:22.752 read: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(267MiB/6007msec) 01:10:22.752 slat (usec): min=4, max=5046, avg=43.50, stdev=203.74 01:10:22.752 clat (usec): min=368, max=20760, avg=7676.17, stdev=2269.66 01:10:22.752 lat (usec): min=381, max=20772, avg=7719.66, stdev=2277.44 01:10:22.752 clat percentiles (usec): 01:10:22.752 | 1.00th=[ 2147], 5.00th=[ 3556], 10.00th=[ 5014], 20.00th=[ 6390], 01:10:22.752 | 30.00th=[ 6915], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7963], 01:10:22.752 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[11600], 01:10:22.752 | 99.00th=[14615], 99.50th=[15401], 99.90th=[17695], 99.95th=[19006], 01:10:22.752 | 99.99th=[20317] 01:10:22.752 bw ( KiB/s): min=11240, max=34720, per=52.76%, avg=24028.09, stdev=6422.45, samples=11 01:10:22.752 iops : min= 2810, max= 8680, avg=6007.00, stdev=1605.59, samples=11 01:10:22.752 write: IOPS=6615, BW=25.8MiB/s (27.1MB/s)(142MiB/5487msec); 0 zone resets 01:10:22.752 slat (usec): min=6, max=2803, avg=55.28, stdev=135.80 01:10:22.752 clat (usec): min=477, max=18082, avg=6545.93, stdev=2199.37 01:10:22.752 lat (usec): min=515, max=18109, avg=6601.21, stdev=2204.85 01:10:22.752 clat percentiles (usec): 01:10:22.752 | 1.00th=[ 1467], 5.00th=[ 2311], 10.00th=[ 3425], 20.00th=[ 5080], 01:10:22.752 | 30.00th=[ 5932], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 7046], 01:10:22.752 | 70.00th=[ 7439], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[10421], 01:10:22.752 | 99.00th=[12518], 99.50th=[13435], 99.90th=[14877], 99.95th=[15008], 01:10:22.752 | 99.99th=[17171] 01:10:22.752 bw ( KiB/s): min=11504, max=35648, per=90.84%, avg=24037.64, stdev=6278.30, samples=11 01:10:22.752 iops : min= 2876, max= 8912, avg=6009.36, stdev=1569.54, samples=11 01:10:22.752 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.12% 01:10:22.752 lat (msec) : 2=1.52%, 4=7.01%, 10=82.14%, 20=9.13%, 50=0.01% 01:10:22.752 cpu : usr=6.04%, sys=25.52%, ctx=7321, majf=0, minf=84 01:10:22.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:10:22.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:22.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:10:22.752 issued rwts: total=68389,36297,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:22.752 latency : target=0, window=0, percentile=100.00%, depth=128 01:10:22.752 01:10:22.752 Run status group 0 (all jobs): 01:10:22.752 READ: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=267MiB (280MB), run=6007-6007msec 01:10:22.752 WRITE: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=142MiB (149MB), run=5487-5487msec 01:10:22.752 01:10:22.752 Disk stats (read/write): 01:10:22.752 nvme0n1: ios=67566/35725, merge=0/0, ticks=483517/217855, in_queue=701372, util=98.70% 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:10:22.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath 
-- common/autotest_common.sh@1219 -- # local i=0 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 01:10:22.752 11:07:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:10:22.753 11:07:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 01:10:22.753 11:07:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 01:10:22.753 11:07:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 01:10:22.753 11:07:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 01:10:22.753 11:07:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 01:10:22.753 11:07:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 01:10:23.011 11:07:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:10:23.011 11:07:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 01:10:23.011 11:07:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 01:10:23.011 11:07:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:10:23.011 rmmod nvme_tcp 01:10:23.011 rmmod nvme_fabrics 01:10:23.011 rmmod nvme_keyring 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 91618 ']' 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 91618 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 91618 ']' 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 91618 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91618 01:10:23.011 killing process with pid 91618 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91618' 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 91618 01:10:23.011 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 91618 01:10:23.270 
11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:10:23.270 01:10:23.270 real 0m20.556s 01:10:23.270 user 1m20.712s 01:10:23.270 sys 0m6.542s 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:23.270 ************************************ 01:10:23.270 END TEST nvmf_target_multipath 01:10:23.270 ************************************ 01:10:23.270 11:07:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:10:23.270 11:07:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:10:23.270 11:07:28 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:10:23.270 11:07:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:10:23.270 11:07:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:23.270 11:07:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:10:23.270 ************************************ 01:10:23.270 START TEST nvmf_zcopy 01:10:23.270 ************************************ 01:10:23.270 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:10:23.529 * Looking for test storage... 
01:10:23.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:23.529 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:10:23.530 Cannot find device "nvmf_tgt_br" 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:10:23.530 Cannot find device "nvmf_tgt_br2" 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:10:23.530 Cannot find device "nvmf_tgt_br" 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:10:23.530 Cannot find device "nvmf_tgt_br2" 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:23.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:23.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:10:23.530 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:10:23.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:23.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 01:10:23.789 01:10:23.789 --- 10.0.0.2 ping statistics --- 01:10:23.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:23.789 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 01:10:23.789 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:10:23.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:23.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 01:10:23.790 01:10:23.790 --- 10.0.0.3 ping statistics --- 01:10:23.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:23.790 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:23.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:10:23.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:10:23.790 01:10:23.790 --- 10.0.0.1 ping statistics --- 01:10:23.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:23.790 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=92211 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 92211 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 92211 ']' 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:23.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:23.790 11:07:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:23.790 [2024-07-22 11:07:28.984325] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:10:23.790 [2024-07-22 11:07:28.984390] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:24.048 [2024-07-22 11:07:29.119858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:24.048 [2024-07-22 11:07:29.195550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:24.048 [2024-07-22 11:07:29.195648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:10:24.048 [2024-07-22 11:07:29.195658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:24.048 [2024-07-22 11:07:29.195668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:24.048 [2024-07-22 11:07:29.195675] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:24.048 [2024-07-22 11:07:29.195707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:24.985 [2024-07-22 11:07:29.984019] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:24.985 11:07:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:24.985 [2024-07-22 11:07:30.000186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:24.985 malloc0 01:10:24.985 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:24.986 
11:07:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:24.986 { 01:10:24.986 "params": { 01:10:24.986 "name": "Nvme$subsystem", 01:10:24.986 "trtype": "$TEST_TRANSPORT", 01:10:24.986 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:24.986 "adrfam": "ipv4", 01:10:24.986 "trsvcid": "$NVMF_PORT", 01:10:24.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:24.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:24.986 "hdgst": ${hdgst:-false}, 01:10:24.986 "ddgst": ${ddgst:-false} 01:10:24.986 }, 01:10:24.986 "method": "bdev_nvme_attach_controller" 01:10:24.986 } 01:10:24.986 EOF 01:10:24.986 )") 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 01:10:24.986 11:07:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:10:24.986 "params": { 01:10:24.986 "name": "Nvme1", 01:10:24.986 "trtype": "tcp", 01:10:24.986 "traddr": "10.0.0.2", 01:10:24.986 "adrfam": "ipv4", 01:10:24.986 "trsvcid": "4420", 01:10:24.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:10:24.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:10:24.986 "hdgst": false, 01:10:24.986 "ddgst": false 01:10:24.986 }, 01:10:24.986 "method": "bdev_nvme_attach_controller" 01:10:24.986 }' 01:10:24.986 [2024-07-22 11:07:30.095923] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:10:24.986 [2024-07-22 11:07:30.096044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92262 ] 01:10:25.245 [2024-07-22 11:07:30.240133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:25.245 [2024-07-22 11:07:30.384723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:10:25.505 Running I/O for 10 seconds... 
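The bdevperf run above takes its NVMe-oF target configuration over a file descriptor rather than a file on disk: gen_nvmf_target_json expands the heredoc into the JSON object printed in the trace, and the test hands it to bdevperf as --json /dev/fd/62 via process substitution. A minimal bash sketch of that pattern follows, reusing only values already printed above (nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, -t 10 -q 128 -w verify -o 8192); the variable names are illustrative, and treating this fragment as the complete configuration bdevperf expects is an assumption, since the outer wrapper emitted by gen_nvmf_target_json is not visible in the trace.

  #!/usr/bin/env bash
  # Sketch only: mirrors the --json /dev/fd/NN invocation traced above.
  # Values are copied from the trace; the JSON shape being accepted as-is is an assumption.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  config='{
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }'
  # <(...) exposes the generated config to bdevperf as /dev/fd/NN, so no temp file is written.
  "$bdevperf" --json <(printf '%s\n' "$config") -t 10 -q 128 -w verify -o 8192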
01:10:35.468 01:10:35.468 Latency(us) 01:10:35.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:35.469 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:10:35.469 Verification LBA range: start 0x0 length 0x1000 01:10:35.469 Nvme1n1 : 10.01 7559.13 59.06 0.00 0.00 16881.75 439.39 27167.65 01:10:35.469 =================================================================================================================== 01:10:35.469 Total : 7559.13 59.06 0.00 0.00 16881.75 439.39 27167.65 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=92379 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:10:35.726 11:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:10:35.726 { 01:10:35.726 "params": { 01:10:35.726 "name": "Nvme$subsystem", 01:10:35.726 "trtype": "$TEST_TRANSPORT", 01:10:35.726 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:35.726 "adrfam": "ipv4", 01:10:35.726 "trsvcid": "$NVMF_PORT", 01:10:35.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:35.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:35.727 "hdgst": ${hdgst:-false}, 01:10:35.727 "ddgst": ${ddgst:-false} 01:10:35.727 }, 01:10:35.727 "method": "bdev_nvme_attach_controller" 01:10:35.727 } 01:10:35.727 EOF 01:10:35.727 )") 01:10:35.727 [2024-07-22 11:07:40.877316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.727 [2024-07-22 11:07:40.877370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.727 11:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 01:10:35.727 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.727 11:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
01:10:35.727 11:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 01:10:35.727 11:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:10:35.727 "params": { 01:10:35.727 "name": "Nvme1", 01:10:35.727 "trtype": "tcp", 01:10:35.727 "traddr": "10.0.0.2", 01:10:35.727 "adrfam": "ipv4", 01:10:35.727 "trsvcid": "4420", 01:10:35.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:10:35.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:10:35.727 "hdgst": false, 01:10:35.727 "ddgst": false 01:10:35.727 }, 01:10:35.727 "method": "bdev_nvme_attach_controller" 01:10:35.727 }' 01:10:35.727 [2024-07-22 11:07:40.889257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.727 [2024-07-22 11:07:40.889281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.727 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.727 [2024-07-22 11:07:40.901248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.727 [2024-07-22 11:07:40.901270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.727 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.727 [2024-07-22 11:07:40.913248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.727 [2024-07-22 11:07:40.913272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.727 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.727 [2024-07-22 11:07:40.925250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.727 [2024-07-22 11:07:40.925271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.727 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.727 [2024-07-22 11:07:40.930235] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:10:35.727 [2024-07-22 11:07:40.930324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92379 ] 01:10:35.985 [2024-07-22 11:07:40.937252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.985 [2024-07-22 11:07:40.937272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.985 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.985 [2024-07-22 11:07:40.949255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.985 [2024-07-22 11:07:40.949276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.985 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.985 [2024-07-22 11:07:40.961259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.985 [2024-07-22 11:07:40.961280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.985 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.985 [2024-07-22 11:07:40.973260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.985 [2024-07-22 11:07:40.973280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.985 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.985 [2024-07-22 11:07:40.985262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:40.985283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:40.997267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:40.997287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:10:35.986 [2024-07-22 11:07:41.009270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.009291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.021275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.021295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.033285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.033311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.045282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.045305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.057289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.057309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 [2024-07-22 11:07:41.061234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.069305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.069331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.077288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.077311] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.089306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.089333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.101304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.101330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.113303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.113328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.121548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:10:35.986 [2024-07-22 11:07:41.125303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.125324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.137313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.137345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.149313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.149344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.157299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.157322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.169315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.169343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:35.986 [2024-07-22 11:07:41.181315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:35.986 [2024-07-22 11:07:41.181341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:35.986 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.193319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.193347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.205318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.205343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.217323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.217349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.225315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.225341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.237322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.237345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.249360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.249390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.261339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.261363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.273340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.273363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.285345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.285369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.297341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.297365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.309362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.309387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 Running I/O for 5 seconds... 01:10:36.245 [2024-07-22 11:07:41.321350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.321372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.338304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.338333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.355479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.355506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.371448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.371474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.388443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.388469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.404390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.404418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.245 [2024-07-22 11:07:41.419752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.245 [2024-07-22 11:07:41.419779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.245 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.246 [2024-07-22 11:07:41.434005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.246 [2024-07-22 11:07:41.434031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.246 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.246 [2024-07-22 11:07:41.448372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.246 [2024-07-22 11:07:41.448400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.246 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.460457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.460483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.476416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.476442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.492072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.492098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.505761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.505787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.521551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.521578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.537745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.537771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.554523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.554549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.569795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.569821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.584365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.584392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.600652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
01:10:36.504 [2024-07-22 11:07:41.600678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.616612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.616638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.631724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.631750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.648054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.648092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.659334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.659360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.673815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.673841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.690583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.690609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.504 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.504 [2024-07-22 11:07:41.706843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.504 [2024-07-22 11:07:41.706870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.763 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.763 [2024-07-22 11:07:41.722928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.763 [2024-07-22 11:07:41.722954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.763 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.763 [2024-07-22 11:07:41.734664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.763 [2024-07-22 11:07:41.734691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.763 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.763 [2024-07-22 11:07:41.742341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.763 [2024-07-22 11:07:41.742366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.757033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.757059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.765194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.765220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.780151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.780177] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.795862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.795888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.809403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.809430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.824785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.824811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.840793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.840819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.852772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.852797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.868667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.868694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.884860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.884887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.901307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.901332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.917909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.917935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.933393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.933420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.947696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.947721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:36.764 [2024-07-22 11:07:41.959105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:36.764 [2024-07-22 11:07:41.959131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:36.764 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.030 [2024-07-22 11:07:41.975288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.030 [2024-07-22 11:07:41.975314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:10:37.030 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.030 [2024-07-22 11:07:41.990072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.030 [2024-07-22 11:07:41.990098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.030 2024/07/22 11:07:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.030 [2024-07-22 11:07:42.004888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.030 [2024-07-22 11:07:42.004916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.030 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.020720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.020747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.032011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.032037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.047849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.047876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.063087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.063113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:10:37.031 [2024-07-22 11:07:42.079225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.079252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.089876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.089902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.105263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.105289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.121120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.121146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.132265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.132291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.147293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.147320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.163499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.163525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.175203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.175228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.190289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.190315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.206513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.206539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.031 [2024-07-22 11:07:42.223080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.031 [2024-07-22 11:07:42.223106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.031 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.299 [2024-07-22 11:07:42.238907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.238933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.254180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.254206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.270405] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.270431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.286493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.286520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.298078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.298105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.313893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.313920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.329487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.329514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.337525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.337551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.346969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.346994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.355070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.355096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.369987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.370013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.384368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.384393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.395627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.395653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.405587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.405613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.420817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.420843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.436252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.436278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.445587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.445613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.461403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.461429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.477095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.477121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.300 [2024-07-22 11:07:42.491943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.300 [2024-07-22 11:07:42.491979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.300 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.508545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.508571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.524354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.524380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.536151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.536177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.551643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.551668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.567749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.567774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.583450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.583475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.599249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.599275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.614302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.614328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.629114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
01:10:37.590 [2024-07-22 11:07:42.629140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.644445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.644470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.658790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.658816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.670376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.670403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.686396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.686422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.701577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.701603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.711333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.711360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.726214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.726245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.742739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.742764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.759647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.759673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.775134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.775159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.590 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.590 [2024-07-22 11:07:42.789740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.590 [2024-07-22 11:07:42.789766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.805658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.805684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.820088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.820114] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.836670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.836696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.853337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.853363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.868972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.868997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.883654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.883680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.899649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.899677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.910784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.910810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.926558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.926615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.942726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.942752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.959786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.959813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.975850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.975877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:42.991586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:42.991618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:43.006046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:43.006073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:43.017695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:43.017721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:10:37.852 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:43.032795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:43.032821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:37.852 [2024-07-22 11:07:43.043810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:37.852 [2024-07-22 11:07:43.043837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:37.852 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.111 [2024-07-22 11:07:43.059639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.111 [2024-07-22 11:07:43.059665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.111 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.111 [2024-07-22 11:07:43.075223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.111 [2024-07-22 11:07:43.075248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.111 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.111 [2024-07-22 11:07:43.089713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.111 [2024-07-22 11:07:43.089740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.111 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.111 [2024-07-22 11:07:43.105619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.111 [2024-07-22 11:07:43.105646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.111 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:10:38.111 [2024-07-22 11:07:43.120942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.111 [2024-07-22 11:07:43.120977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.111 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.111 [2024-07-22 11:07:43.136338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.111 [2024-07-22 11:07:43.136364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.150645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.150671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.162128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.162154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.177478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.177504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.188777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.188803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.204365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.204391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.220239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.220265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.231900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.231926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.247830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.247856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.263681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.263708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.274939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.274974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.290730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.290757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.112 [2024-07-22 11:07:43.305569] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.112 [2024-07-22 11:07:43.305594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.112 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.370 [2024-07-22 11:07:43.320117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.370 [2024-07-22 11:07:43.320145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.370 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.370 [2024-07-22 11:07:43.336569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.370 [2024-07-22 11:07:43.336595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.370 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.370 [2024-07-22 11:07:43.352728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.370 [2024-07-22 11:07:43.352753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.370 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.370 [2024-07-22 11:07:43.368048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.370 [2024-07-22 11:07:43.368073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.383035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.383060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.399923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.399948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.415861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.415902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.427612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.427638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.442711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.442737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.457430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.457456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.473028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.473055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.488350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.488379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.504072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.504098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.519477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.519503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.534728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.534749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.550380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.550406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.371 [2024-07-22 11:07:43.565448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.371 [2024-07-22 11:07:43.565474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.371 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.582015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.582040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.597819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.597845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.608908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.608934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.624613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.624641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.639739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.639765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.649056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.649083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.664582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.664608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.675345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.675371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.684525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
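On the target side, spdk_nvmf_subsystem_add_ns_ext refuses each request because NSID 1 is already occupied, so every retry in this loop fails identically. An illustrative sketch of that duplicate-NSID check (not the actual SPDK C code, just the idea, assuming a simple nsid-to-bdev map per subsystem):

    # Illustrative only: why every re-add of NSID 1 is rejected.
    class Subsystem:
        def __init__(self):
            self.namespaces = {}  # nsid -> bdev name

        def add_ns(self, bdev_name, nsid):
            if nsid in self.namespaces:
                raise ValueError(f"Requested NSID {nsid} already in use")
            self.namespaces[nsid] = bdev_name
            return nsid

    cnode1 = Subsystem()
    cnode1.add_ns("malloc0", nsid=1)          # first attach succeeds
    try:
        cnode1.add_ns("malloc0", nsid=1)      # every further attempt fails, as in the log
    except ValueError as err:
        print(err)                            # -> Requested NSID 1 already in use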
01:10:38.630 [2024-07-22 11:07:43.684552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.700553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.700579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.716069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.716094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.732235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.732262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.748196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.748223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.762745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.762770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.773976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.774001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.630 [2024-07-22 11:07:43.789861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.630 [2024-07-22 11:07:43.789887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.630 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.631 [2024-07-22 11:07:43.805046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.631 [2024-07-22 11:07:43.805071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.631 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.631 [2024-07-22 11:07:43.819655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.631 [2024-07-22 11:07:43.819680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.631 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.631 [2024-07-22 11:07:43.835928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.631 [2024-07-22 11:07:43.835966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.847533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.847559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.862763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.862789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.878426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.878452] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.893536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.893563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.909069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.909095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.924414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.924440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.940721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.940748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.957494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.957520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.973239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.973266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:43.989176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:43.989202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:44.004243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:44.004269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.889 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.889 [2024-07-22 11:07:44.020075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.889 [2024-07-22 11:07:44.020100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.890 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.890 [2024-07-22 11:07:44.031263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.890 [2024-07-22 11:07:44.031289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.890 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.890 [2024-07-22 11:07:44.046611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.890 [2024-07-22 11:07:44.046638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.890 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.890 [2024-07-22 11:07:44.062810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.890 [2024-07-22 11:07:44.062836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:38.890 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.890 [2024-07-22 11:07:44.078106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.890 [2024-07-22 11:07:44.078131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:10:38.890 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:38.890 [2024-07-22 11:07:44.092897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:38.890 [2024-07-22 11:07:44.092923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:39.148 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:39.148 [2024-07-22 11:07:44.108110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:39.148 [2024-07-22 11:07:44.108135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:39.148 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:39.148 [2024-07-22 11:07:44.123175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:39.148 [2024-07-22 11:07:44.123201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:39.148 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:39.148 [2024-07-22 11:07:44.139139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:39.148 [2024-07-22 11:07:44.139165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:39.148 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:39.148 [2024-07-22 11:07:44.155371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:39.148 [2024-07-22 11:07:44.155398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:39.148 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:39.148 [2024-07-22 11:07:44.166404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:39.148 [2024-07-22 11:07:44.166430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:39.148 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:10:39.148 [2024-07-22 11:07:44.182155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:39.149 [2024-07-22 11:07:44.182181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:39.149 2024/07/22 11:07:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[repeated entries collapsed: the same three-line pattern -- subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use", nvmf_rpc.c:1553:nvmf_rpc_ns_paused "Unable to add namespace", and the JSON-RPC failure for nvmf_subsystem_add_ns with identical params and Code=-32602 Msg=Invalid parameters -- recurs for every further iteration, with target timestamps 2024-07-22 11:07:44.198 through 11:07:46.252 and build-clock prefixes 01:10:39.149 through 01:10:41.226; only the timestamps differ between iterations]
namespace 01:10:41.226 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.226 [2024-07-22 11:07:46.264020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.226 [2024-07-22 11:07:46.264046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.226 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.226 [2024-07-22 11:07:46.280905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.226 [2024-07-22 11:07:46.280932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.226 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.226 [2024-07-22 11:07:46.296478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.226 [2024-07-22 11:07:46.296505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.226 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.226 [2024-07-22 11:07:46.312833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.226 [2024-07-22 11:07:46.312858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.226 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.226 [2024-07-22 11:07:46.327254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.226 [2024-07-22 11:07:46.327280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.226 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.226 01:10:41.226 Latency(us) 01:10:41.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:41.226 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 01:10:41.226 Nvme1n1 : 5.01 15029.75 117.42 0.00 0.00 8505.42 3753.43 17992.61 01:10:41.226 =================================================================================================================== 01:10:41.226 Total : 15029.75 117.42 0.00 0.00 8505.42 3753.43 17992.61 01:10:41.226 [2024-07-22 
11:07:46.337199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.226 [2024-07-22 11:07:46.337224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.226 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.226 [2024-07-22 11:07:46.349196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.226 [2024-07-22 11:07:46.349222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.226 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.227 [2024-07-22 11:07:46.361208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.227 [2024-07-22 11:07:46.361239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.227 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.227 [2024-07-22 11:07:46.373217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.227 [2024-07-22 11:07:46.373247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.227 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.227 [2024-07-22 11:07:46.385224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.227 [2024-07-22 11:07:46.385259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.227 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.227 [2024-07-22 11:07:46.397229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.227 [2024-07-22 11:07:46.397264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.227 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.227 [2024-07-22 11:07:46.409245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.227 [2024-07-22 11:07:46.409280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.227 2024/07/22 11:07:46 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.227 [2024-07-22 11:07:46.417216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.227 [2024-07-22 11:07:46.417243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.227 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.227 [2024-07-22 11:07:46.429233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.227 [2024-07-22 11:07:46.429266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.485 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.485 [2024-07-22 11:07:46.441249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.485 [2024-07-22 11:07:46.441288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.453248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.453292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.465244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.465279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.477242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.477273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.489250] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.489280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.501253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.501289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.513245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.513273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.525237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.525258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.537276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.537332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.549262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.549295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.561249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.561275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.573289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.573332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.585250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.585275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 [2024-07-22 11:07:46.597251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:10:41.486 [2024-07-22 11:07:46.597290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:41.486 2024/07/22 11:07:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:41.486 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (92379) - No such process 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 92379 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:41.486 delay0 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:41.486 11:07:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 01:10:41.744 [2024-07-22 
11:07:46.790710] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:10:48.300 Initializing NVMe Controllers 01:10:48.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:10:48.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:10:48.300 Initialization complete. Launching workers. 01:10:48.300 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 83 01:10:48.300 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 370, failed to submit 33 01:10:48.300 success 197, unsuccess 173, failed 0 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:10:48.300 rmmod nvme_tcp 01:10:48.300 rmmod nvme_fabrics 01:10:48.300 rmmod nvme_keyring 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 92211 ']' 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 92211 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 92211 ']' 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 92211 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92211 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:10:48.300 killing process with pid 92211 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92211' 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 92211 01:10:48.300 11:07:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 92211 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:48.300 11:07:53 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:10:48.300 01:10:48.300 real 0m24.866s 01:10:48.300 user 0m39.526s 01:10:48.300 sys 0m7.328s 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:48.300 ************************************ 01:10:48.300 END TEST nvmf_zcopy 01:10:48.300 ************************************ 01:10:48.300 11:07:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:10:48.300 11:07:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:10:48.300 11:07:53 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:10:48.300 11:07:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:10:48.300 11:07:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:48.300 11:07:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:10:48.300 ************************************ 01:10:48.300 START TEST nvmf_nmic 01:10:48.300 ************************************ 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:10:48.300 * Looking for test storage... 01:10:48.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:48.300 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 
-- # MALLOC_BDEV_SIZE=64 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:48.301 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:10:48.558 Cannot find device "nvmf_tgt_br" 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:10:48.558 Cannot find device "nvmf_tgt_br2" 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:10:48.558 Cannot find device "nvmf_tgt_br" 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- 
# true 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:10:48.558 Cannot find device "nvmf_tgt_br2" 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:10:48.558 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:48.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:48.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:10:48.559 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:10:48.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:48.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 01:10:48.817 01:10:48.817 --- 10.0.0.2 ping statistics --- 01:10:48.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:48.817 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:10:48.817 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:48.817 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 01:10:48.817 01:10:48.817 --- 10.0.0.3 ping statistics --- 01:10:48.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:48.817 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:48.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:48.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 01:10:48.817 01:10:48.817 --- 10.0.0.1 ping statistics --- 01:10:48.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:48.817 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=92699 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 92699 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 92699 ']' 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:48.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
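For orientation, the nvmf_veth_init and nvmfappstart steps traced above boil down to the following condensed sketch. Interface names and addresses are the ones printed in the trace; the readiness loop at the end is only a simplification of the script's own waitforlisten helper, and bringing each link up plus the second target address (10.0.0.3/24) are omitted for brevity:

  # Target network namespace plus two veth pairs bridged back to the initiator side.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # connectivity check before starting the target
  # Launch the target inside the namespace and wait for its JSON-RPC socket.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done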
01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:48.817 11:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:10:48.817 [2024-07-22 11:07:53.955011] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:10:48.817 [2024-07-22 11:07:53.955112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:49.075 [2024-07-22 11:07:54.101647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:10:49.075 [2024-07-22 11:07:54.206293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:49.075 [2024-07-22 11:07:54.206383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:49.075 [2024-07-22 11:07:54.206410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:49.075 [2024-07-22 11:07:54.206420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:49.075 [2024-07-22 11:07:54.206429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:49.075 [2024-07-22 11:07:54.206592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:10:49.075 [2024-07-22 11:07:54.207180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:10:49.075 [2024-07-22 11:07:54.207262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:10:49.075 [2024-07-22 11:07:54.207268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 [2024-07-22 11:07:55.051781] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 Malloc0 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 [2024-07-22 11:07:55.127349] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:50.009 test case1: single bdev can't be used in multiple subsystems 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 [2024-07-22 11:07:55.151053] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 01:10:50.009 [2024-07-22 11:07:55.151092] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:10:50.009 [2024-07-22 11:07:55.151105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:10:50.009 2024/07/22 11:07:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:10:50.009 request: 01:10:50.009 { 01:10:50.009 "method": 
"nvmf_subsystem_add_ns", 01:10:50.009 "params": { 01:10:50.009 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:10:50.009 "namespace": { 01:10:50.009 "bdev_name": "Malloc0", 01:10:50.009 "no_auto_visible": false 01:10:50.009 } 01:10:50.009 } 01:10:50.009 } 01:10:50.009 Got JSON-RPC error response 01:10:50.009 GoRPCClient: error on JSON-RPC call 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 01:10:50.009 Adding namespace failed - expected result. 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:10:50.009 test case2: host connect to nvmf target in multiple paths 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:50.009 [2024-07-22 11:07:55.163166] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:10:50.009 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:10:50.266 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 01:10:50.524 11:07:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:10:50.524 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 01:10:50.524 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:10:50.524 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:10:50.524 11:07:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 01:10:52.421 11:07:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:10:52.421 11:07:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:10:52.421 11:07:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:10:52.421 11:07:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:10:52.422 11:07:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:10:52.422 11:07:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 01:10:52.422 11:07:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:10:52.422 [global] 01:10:52.422 thread=1 01:10:52.422 invalidate=1 01:10:52.422 rw=write 01:10:52.422 time_based=1 01:10:52.422 runtime=1 01:10:52.422 ioengine=libaio 01:10:52.422 direct=1 01:10:52.422 bs=4096 01:10:52.422 
iodepth=1 01:10:52.422 norandommap=0 01:10:52.422 numjobs=1 01:10:52.422 01:10:52.422 verify_dump=1 01:10:52.422 verify_backlog=512 01:10:52.422 verify_state_save=0 01:10:52.422 do_verify=1 01:10:52.422 verify=crc32c-intel 01:10:52.422 [job0] 01:10:52.422 filename=/dev/nvme0n1 01:10:52.422 Could not set queue depth (nvme0n1) 01:10:52.682 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:10:52.682 fio-3.35 01:10:52.682 Starting 1 thread 01:10:54.052 01:10:54.052 job0: (groupid=0, jobs=1): err= 0: pid=92813: Mon Jul 22 11:07:58 2024 01:10:54.052 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 01:10:54.052 slat (nsec): min=15186, max=96929, avg=22658.40, stdev=10808.26 01:10:54.052 clat (usec): min=125, max=793, avg=192.54, stdev=36.41 01:10:54.052 lat (usec): min=147, max=811, avg=215.20, stdev=36.53 01:10:54.052 clat percentiles (usec): 01:10:54.052 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 161], 01:10:54.052 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 190], 60.00th=[ 200], 01:10:54.053 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 249], 01:10:54.053 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 429], 99.95th=[ 693], 01:10:54.053 | 99.99th=[ 791] 01:10:54.053 write: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1001msec); 0 zone resets 01:10:54.053 slat (usec): min=22, max=153, avg=34.03, stdev=14.02 01:10:54.053 clat (usec): min=85, max=259, avg=134.13, stdev=28.94 01:10:54.053 lat (usec): min=113, max=325, avg=168.16, stdev=31.54 01:10:54.053 clat percentiles (usec): 01:10:54.053 | 1.00th=[ 93], 5.00th=[ 97], 10.00th=[ 100], 20.00th=[ 108], 01:10:54.053 | 30.00th=[ 115], 40.00th=[ 122], 50.00th=[ 131], 60.00th=[ 139], 01:10:54.053 | 70.00th=[ 147], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 188], 01:10:54.053 | 99.00th=[ 212], 99.50th=[ 225], 99.90th=[ 249], 99.95th=[ 255], 01:10:54.053 | 99.99th=[ 260] 01:10:54.053 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 01:10:54.053 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:10:54.053 lat (usec) : 100=4.98%, 250=92.65%, 500=2.33%, 750=0.02%, 1000=0.02% 01:10:54.053 cpu : usr=2.40%, sys=10.40%, ctx=5159, majf=0, minf=2 01:10:54.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:10:54.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:54.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:10:54.053 issued rwts: total=2560,2599,0,0 short=0,0,0,0 dropped=0,0,0,0 01:10:54.053 latency : target=0, window=0, percentile=100.00%, depth=1 01:10:54.053 01:10:54.053 Run status group 0 (all jobs): 01:10:54.053 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 01:10:54.053 WRITE: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=10.2MiB (10.6MB), run=1001-1001msec 01:10:54.053 01:10:54.053 Disk stats (read/write): 01:10:54.053 nvme0n1: ios=2167/2560, merge=0/0, ticks=445/385, in_queue=830, util=91.48% 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:10:54.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 01:10:54.053 11:07:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:10:54.053 rmmod nvme_tcp 01:10:54.053 rmmod nvme_fabrics 01:10:54.053 rmmod nvme_keyring 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 92699 ']' 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 92699 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 92699 ']' 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 92699 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92699 01:10:54.053 killing process with pid 92699 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92699' 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 92699 01:10:54.053 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 92699 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:54.311 11:07:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 
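For reference, the multipath sequence that nmic.sh drives above condenses into a short bash sketch. This is a minimal reconstruction from the trace, assuming the target side is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with serial SPDKISFASTANDAWESOME; the until-loop is a simplified stand-in for the harness's waitforserial/waitforserial_disconnect helpers, not the harness code itself.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479
HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479

# expose the same subsystem on a second TCP port (test case 2: multiple paths)
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# connect the initiator over both ports
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# wait until the namespace shows up by serial, then run the 4k write/verify job
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v

# tear both paths down again
nvme disconnect -n nqn.2016-06.io.spdk:cnode1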
01:10:54.570 01:10:54.570 real 0m6.128s 01:10:54.570 user 0m20.424s 01:10:54.570 sys 0m1.452s 01:10:54.570 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 01:10:54.570 11:07:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:10:54.570 ************************************ 01:10:54.570 END TEST nvmf_nmic 01:10:54.570 ************************************ 01:10:54.570 11:07:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:10:54.570 11:07:59 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:10:54.570 11:07:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:10:54.570 11:07:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:10:54.570 11:07:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:10:54.570 ************************************ 01:10:54.570 START TEST nvmf_fio_target 01:10:54.570 ************************************ 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:10:54.570 * Looking for test storage... 01:10:54.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:54.570 
11:07:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:54.570 11:07:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:10:54.571 11:07:59 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:10:54.571 Cannot find device "nvmf_tgt_br" 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:10:54.571 Cannot find device "nvmf_tgt_br2" 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 01:10:54.571 11:07:59 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:10:54.571 Cannot find device "nvmf_tgt_br" 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:10:54.571 Cannot find device "nvmf_tgt_br2" 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 01:10:54.571 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:54.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:54.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:10:54.829 11:07:59 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:10:54.829 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:54.829 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:10:55.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:55.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 01:10:55.087 01:10:55.087 --- 10.0.0.2 ping statistics --- 01:10:55.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:55.087 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:10:55.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:55.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 01:10:55.087 01:10:55.087 --- 10.0.0.3 ping statistics --- 01:10:55.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:55.087 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:55.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:55.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 01:10:55.087 01:10:55.087 --- 10.0.0.1 ping statistics --- 01:10:55.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:55.087 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=92992 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 92992 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # 
'[' -z 92992 ']' 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:10:55.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:10:55.087 11:08:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:10:55.087 [2024-07-22 11:08:00.149385] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:10:55.087 [2024-07-22 11:08:00.149481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:55.087 [2024-07-22 11:08:00.289789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:10:55.345 [2024-07-22 11:08:00.385846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:55.345 [2024-07-22 11:08:00.385928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:55.345 [2024-07-22 11:08:00.385943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:55.345 [2024-07-22 11:08:00.385955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:55.345 [2024-07-22 11:08:00.385978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
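Condensed from the nvmftestinit/nvmfappstart trace above, a bash sketch of how the veth/bridge topology and the in-namespace target come up. Addresses, interface names and nvmf_tgt flags are taken from the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the FORWARD iptables rule are omitted for brevity, and the socket wait is a simplified stand-in for the harness's waitforlisten helper.

# namespace plus veth pairs: the initiator-side interface stays in the root
# namespace, the target-side interface is moved into nvmf_tgt_ns_spdk
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the two host-side veth ends and open TCP/4420 toward the initiator
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp

# launch the target inside the namespace and wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done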
01:10:55.345 [2024-07-22 11:08:00.386494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:10:55.345 [2024-07-22 11:08:00.386711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:10:55.345 [2024-07-22 11:08:00.386857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:10:55.345 [2024-07-22 11:08:00.386880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:10:56.293 11:08:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:10:56.293 11:08:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 01:10:56.293 11:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:10:56.293 11:08:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 01:10:56.293 11:08:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:10:56.293 11:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:56.293 11:08:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:10:56.565 [2024-07-22 11:08:01.496349] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:56.565 11:08:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:56.822 11:08:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:10:56.822 11:08:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:57.079 11:08:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:10:57.079 11:08:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:57.643 11:08:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:10:57.643 11:08:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:57.900 11:08:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:10:57.900 11:08:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:10:58.157 11:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:58.415 11:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:10:58.415 11:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:58.672 11:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:10:58.672 11:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:59.236 11:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 01:10:59.236 11:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:10:59.493 11:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 01:10:59.750 11:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:10:59.750 11:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:11:00.007 11:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:11:00.007 11:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:11:00.264 11:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:11:00.521 [2024-07-22 11:08:05.495635] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:00.521 11:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:11:00.778 11:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:11:01.034 11:08:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:11:01.292 11:08:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:11:01.292 11:08:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 01:11:01.292 11:08:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:11:01.292 11:08:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 01:11:01.292 11:08:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 01:11:01.292 11:08:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 01:11:03.191 11:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:11:03.191 11:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:11:03.191 11:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:11:03.191 11:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 01:11:03.191 11:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:11:03.191 11:08:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 01:11:03.191 11:08:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:11:03.191 [global] 01:11:03.191 thread=1 01:11:03.191 invalidate=1 01:11:03.191 rw=write 01:11:03.191 time_based=1 01:11:03.191 runtime=1 01:11:03.191 ioengine=libaio 01:11:03.191 direct=1 01:11:03.191 bs=4096 01:11:03.191 iodepth=1 01:11:03.191 norandommap=0 01:11:03.191 numjobs=1 01:11:03.191 01:11:03.191 verify_dump=1 01:11:03.191 verify_backlog=512 01:11:03.191 verify_state_save=0 01:11:03.191 do_verify=1 01:11:03.191 verify=crc32c-intel 01:11:03.191 [job0] 01:11:03.191 filename=/dev/nvme0n1 01:11:03.191 [job1] 01:11:03.191 filename=/dev/nvme0n2 01:11:03.191 [job2] 
01:11:03.191 filename=/dev/nvme0n3 01:11:03.191 [job3] 01:11:03.191 filename=/dev/nvme0n4 01:11:03.450 Could not set queue depth (nvme0n1) 01:11:03.450 Could not set queue depth (nvme0n2) 01:11:03.450 Could not set queue depth (nvme0n3) 01:11:03.450 Could not set queue depth (nvme0n4) 01:11:03.450 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:03.450 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:03.450 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:03.450 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:03.450 fio-3.35 01:11:03.450 Starting 4 threads 01:11:04.836 01:11:04.836 job0: (groupid=0, jobs=1): err= 0: pid=93294: Mon Jul 22 11:08:09 2024 01:11:04.836 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:11:04.836 slat (nsec): min=17313, max=74284, avg=25636.85, stdev=7957.73 01:11:04.836 clat (usec): min=223, max=918, avg=319.19, stdev=47.72 01:11:04.836 lat (usec): min=244, max=945, avg=344.83, stdev=48.07 01:11:04.836 clat percentiles (usec): 01:11:04.836 | 1.00th=[ 241], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 281], 01:11:04.836 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 326], 01:11:04.836 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 388], 01:11:04.837 | 99.00th=[ 424], 99.50th=[ 469], 99.90th=[ 742], 99.95th=[ 922], 01:11:04.837 | 99.99th=[ 922] 01:11:04.837 write: IOPS=1535, BW=6142KiB/s (6289kB/s)(6148KiB/1001msec); 0 zone resets 01:11:04.837 slat (usec): min=27, max=130, avg=39.56, stdev=10.14 01:11:04.837 clat (usec): min=174, max=886, avg=260.24, stdev=44.18 01:11:04.837 lat (usec): min=210, max=922, avg=299.81, stdev=44.30 01:11:04.837 clat percentiles (usec): 01:11:04.837 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 229], 01:11:04.837 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 265], 01:11:04.837 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 330], 01:11:04.837 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 562], 99.95th=[ 889], 01:11:04.837 | 99.99th=[ 889] 01:11:04.837 bw ( KiB/s): min= 8175, max= 8175, per=29.32%, avg=8175.00, stdev= 0.00, samples=1 01:11:04.837 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 01:11:04.837 lat (usec) : 250=23.46%, 500=76.18%, 750=0.29%, 1000=0.07% 01:11:04.837 cpu : usr=1.50%, sys=7.80%, ctx=3075, majf=0, minf=13 01:11:04.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:04.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:04.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:04.837 issued rwts: total=1536,1537,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:04.837 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:04.837 job1: (groupid=0, jobs=1): err= 0: pid=93295: Mon Jul 22 11:08:09 2024 01:11:04.837 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 01:11:04.837 slat (nsec): min=17920, max=79916, avg=26063.17, stdev=9166.77 01:11:04.837 clat (usec): min=132, max=649, avg=216.94, stdev=43.04 01:11:04.837 lat (usec): min=152, max=674, avg=243.00, stdev=43.16 01:11:04.837 clat percentiles (usec): 01:11:04.837 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 165], 20.00th=[ 180], 01:11:04.837 | 30.00th=[ 192], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 223], 01:11:04.837 | 70.00th=[ 237], 
80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 293], 01:11:04.837 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 359], 99.95th=[ 363], 01:11:04.837 | 99.99th=[ 652] 01:11:04.837 write: IOPS=2366, BW=9467KiB/s (9694kB/s)(9476KiB/1001msec); 0 zone resets 01:11:04.837 slat (usec): min=24, max=127, avg=36.91, stdev=12.18 01:11:04.837 clat (usec): min=88, max=328, avg=169.65, stdev=39.57 01:11:04.837 lat (usec): min=124, max=363, avg=206.56, stdev=41.13 01:11:04.837 clat percentiles (usec): 01:11:04.837 | 1.00th=[ 105], 5.00th=[ 115], 10.00th=[ 123], 20.00th=[ 135], 01:11:04.837 | 30.00th=[ 145], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 176], 01:11:04.837 | 70.00th=[ 186], 80.00th=[ 202], 90.00th=[ 223], 95.00th=[ 243], 01:11:04.837 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 322], 99.95th=[ 326], 01:11:04.837 | 99.99th=[ 330] 01:11:04.837 bw ( KiB/s): min= 9013, max= 9013, per=32.32%, avg=9013.00, stdev= 0.00, samples=1 01:11:04.837 iops : min= 2253, max= 2253, avg=2253.00, stdev= 0.00, samples=1 01:11:04.837 lat (usec) : 100=0.14%, 250=88.02%, 500=11.82%, 750=0.02% 01:11:04.837 cpu : usr=2.50%, sys=10.60%, ctx=4426, majf=0, minf=7 01:11:04.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:04.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:04.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:04.837 issued rwts: total=2048,2369,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:04.837 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:04.837 job2: (groupid=0, jobs=1): err= 0: pid=93296: Mon Jul 22 11:08:09 2024 01:11:04.837 read: IOPS=1093, BW=4376KiB/s (4481kB/s)(4380KiB/1001msec) 01:11:04.837 slat (usec): min=13, max=108, avg=20.73, stdev= 9.20 01:11:04.837 clat (usec): min=227, max=1950, avg=407.53, stdev=69.48 01:11:04.837 lat (usec): min=266, max=1978, avg=428.26, stdev=69.86 01:11:04.837 clat percentiles (usec): 01:11:04.837 | 1.00th=[ 297], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 367], 01:11:04.837 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 416], 01:11:04.837 | 70.00th=[ 429], 80.00th=[ 449], 90.00th=[ 478], 95.00th=[ 494], 01:11:04.837 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 709], 99.95th=[ 1958], 01:11:04.837 | 99.99th=[ 1958] 01:11:04.837 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:11:04.837 slat (usec): min=17, max=138, avg=34.63, stdev=12.68 01:11:04.837 clat (usec): min=137, max=525, avg=306.55, stdev=46.66 01:11:04.837 lat (usec): min=185, max=555, avg=341.18, stdev=46.32 01:11:04.837 clat percentiles (usec): 01:11:04.837 | 1.00th=[ 215], 5.00th=[ 239], 10.00th=[ 253], 20.00th=[ 269], 01:11:04.837 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 314], 01:11:04.837 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 388], 01:11:04.837 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 515], 99.95th=[ 529], 01:11:04.837 | 99.99th=[ 529] 01:11:04.837 bw ( KiB/s): min= 6475, max= 6475, per=23.22%, avg=6475.00, stdev= 0.00, samples=1 01:11:04.837 iops : min= 1618, max= 1618, avg=1618.00, stdev= 0.00, samples=1 01:11:04.837 lat (usec) : 250=5.17%, 500=92.97%, 750=1.82% 01:11:04.837 lat (msec) : 2=0.04% 01:11:04.837 cpu : usr=1.30%, sys=5.80%, ctx=2633, majf=0, minf=7 01:11:04.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:04.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:04.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
01:11:04.837 issued rwts: total=1095,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:04.837 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:04.837 job3: (groupid=0, jobs=1): err= 0: pid=93297: Mon Jul 22 11:08:09 2024 01:11:04.837 read: IOPS=1094, BW=4380KiB/s (4485kB/s)(4384KiB/1001msec) 01:11:04.837 slat (usec): min=13, max=126, avg=23.24, stdev= 9.84 01:11:04.837 clat (usec): min=181, max=1937, avg=405.26, stdev=70.50 01:11:04.837 lat (usec): min=218, max=1953, avg=428.50, stdev=71.12 01:11:04.837 clat percentiles (usec): 01:11:04.837 | 1.00th=[ 302], 5.00th=[ 326], 10.00th=[ 338], 20.00th=[ 359], 01:11:04.837 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 412], 01:11:04.837 | 70.00th=[ 429], 80.00th=[ 449], 90.00th=[ 478], 95.00th=[ 502], 01:11:04.837 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 709], 99.95th=[ 1942], 01:11:04.837 | 99.99th=[ 1942] 01:11:04.837 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:11:04.837 slat (usec): min=13, max=134, avg=34.51, stdev=12.74 01:11:04.837 clat (usec): min=149, max=511, avg=306.80, stdev=45.85 01:11:04.837 lat (usec): min=198, max=535, avg=341.31, stdev=45.31 01:11:04.837 clat percentiles (usec): 01:11:04.837 | 1.00th=[ 217], 5.00th=[ 237], 10.00th=[ 249], 20.00th=[ 269], 01:11:04.837 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 01:11:04.837 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 388], 01:11:04.837 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 478], 99.95th=[ 510], 01:11:04.837 | 99.99th=[ 510] 01:11:04.837 bw ( KiB/s): min= 6488, max= 6488, per=23.27%, avg=6488.00, stdev= 0.00, samples=1 01:11:04.837 iops : min= 1622, max= 1622, avg=1622.00, stdev= 0.00, samples=1 01:11:04.837 lat (usec) : 250=6.16%, 500=91.68%, 750=2.13% 01:11:04.837 lat (msec) : 2=0.04% 01:11:04.837 cpu : usr=1.00%, sys=6.00%, ctx=2634, majf=0, minf=8 01:11:04.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:04.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:04.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:04.837 issued rwts: total=1096,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:04.837 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:04.837 01:11:04.837 Run status group 0 (all jobs): 01:11:04.837 READ: bw=22.5MiB/s (23.6MB/s), 4376KiB/s-8184KiB/s (4481kB/s-8380kB/s), io=22.6MiB (23.7MB), run=1001-1001msec 01:11:04.837 WRITE: bw=27.2MiB/s (28.6MB/s), 6138KiB/s-9467KiB/s (6285kB/s-9694kB/s), io=27.3MiB (28.6MB), run=1001-1001msec 01:11:04.837 01:11:04.837 Disk stats (read/write): 01:11:04.837 nvme0n1: ios=1200/1536, merge=0/0, ticks=405/437, in_queue=842, util=88.57% 01:11:04.837 nvme0n2: ios=1795/2048, merge=0/0, ticks=439/407, in_queue=846, util=89.67% 01:11:04.837 nvme0n3: ios=1030/1227, merge=0/0, ticks=414/398, in_queue=812, util=89.39% 01:11:04.837 nvme0n4: ios=1051/1226, merge=0/0, ticks=462/407, in_queue=869, util=90.17% 01:11:04.837 11:08:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:11:04.837 [global] 01:11:04.837 thread=1 01:11:04.837 invalidate=1 01:11:04.837 rw=randwrite 01:11:04.837 time_based=1 01:11:04.837 runtime=1 01:11:04.837 ioengine=libaio 01:11:04.837 direct=1 01:11:04.837 bs=4096 01:11:04.837 iodepth=1 01:11:04.837 norandommap=0 01:11:04.837 numjobs=1 01:11:04.837 01:11:04.837 verify_dump=1 01:11:04.837 verify_backlog=512 01:11:04.837 
verify_state_save=0 01:11:04.837 do_verify=1 01:11:04.837 verify=crc32c-intel 01:11:04.837 [job0] 01:11:04.837 filename=/dev/nvme0n1 01:11:04.837 [job1] 01:11:04.837 filename=/dev/nvme0n2 01:11:04.837 [job2] 01:11:04.837 filename=/dev/nvme0n3 01:11:04.837 [job3] 01:11:04.837 filename=/dev/nvme0n4 01:11:04.837 Could not set queue depth (nvme0n1) 01:11:04.837 Could not set queue depth (nvme0n2) 01:11:04.837 Could not set queue depth (nvme0n3) 01:11:04.837 Could not set queue depth (nvme0n4) 01:11:04.837 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:04.837 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:04.837 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:04.837 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:04.837 fio-3.35 01:11:04.837 Starting 4 threads 01:11:06.213 01:11:06.213 job0: (groupid=0, jobs=1): err= 0: pid=93357: Mon Jul 22 11:08:11 2024 01:11:06.213 read: IOPS=1149, BW=4599KiB/s (4710kB/s)(4604KiB/1001msec) 01:11:06.213 slat (usec): min=7, max=167, avg=22.25, stdev=13.36 01:11:06.213 clat (usec): min=221, max=630, avg=419.95, stdev=63.51 01:11:06.213 lat (usec): min=238, max=663, avg=442.20, stdev=64.37 01:11:06.213 clat percentiles (usec): 01:11:06.213 | 1.00th=[ 302], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 363], 01:11:06.213 | 30.00th=[ 379], 40.00th=[ 400], 50.00th=[ 416], 60.00th=[ 429], 01:11:06.213 | 70.00th=[ 449], 80.00th=[ 474], 90.00th=[ 506], 95.00th=[ 537], 01:11:06.213 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 627], 99.95th=[ 627], 01:11:06.213 | 99.99th=[ 627] 01:11:06.213 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:11:06.213 slat (nsec): min=13748, max=97188, avg=30557.52, stdev=11566.28 01:11:06.213 clat (usec): min=112, max=504, avg=285.20, stdev=60.25 01:11:06.213 lat (usec): min=137, max=532, avg=315.75, stdev=59.99 01:11:06.213 clat percentiles (usec): 01:11:06.213 | 1.00th=[ 167], 5.00th=[ 192], 10.00th=[ 208], 20.00th=[ 231], 01:11:06.213 | 30.00th=[ 249], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 302], 01:11:06.213 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 388], 01:11:06.213 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 498], 99.95th=[ 506], 01:11:06.213 | 99.99th=[ 506] 01:11:06.213 bw ( KiB/s): min= 6312, max= 6312, per=26.95%, avg=6312.00, stdev= 0.00, samples=1 01:11:06.213 iops : min= 1578, max= 1578, avg=1578.00, stdev= 0.00, samples=1 01:11:06.213 lat (usec) : 250=17.60%, 500=77.78%, 750=4.61% 01:11:06.213 cpu : usr=1.20%, sys=5.50%, ctx=2740, majf=0, minf=13 01:11:06.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:06.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.213 issued rwts: total=1151,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:06.213 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:06.213 job1: (groupid=0, jobs=1): err= 0: pid=93358: Mon Jul 22 11:08:11 2024 01:11:06.213 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:11:06.213 slat (usec): min=18, max=116, avg=29.73, stdev=12.83 01:11:06.213 clat (usec): min=155, max=763, avg=314.58, stdev=141.08 01:11:06.213 lat (usec): min=177, max=792, avg=344.31, stdev=148.40 01:11:06.213 clat 
percentiles (usec): 01:11:06.213 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 208], 01:11:06.213 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 249], 60.00th=[ 265], 01:11:06.213 | 70.00th=[ 334], 80.00th=[ 478], 90.00th=[ 545], 95.00th=[ 594], 01:11:06.213 | 99.00th=[ 668], 99.50th=[ 693], 99.90th=[ 725], 99.95th=[ 766], 01:11:06.213 | 99.99th=[ 766] 01:11:06.213 write: IOPS=1545, BW=6182KiB/s (6330kB/s)(6188KiB/1001msec); 0 zone resets 01:11:06.213 slat (usec): min=26, max=130, avg=42.05, stdev=12.71 01:11:06.213 clat (usec): min=104, max=898, avg=255.71, stdev=114.70 01:11:06.213 lat (usec): min=139, max=947, avg=297.76, stdev=121.40 01:11:06.213 clat percentiles (usec): 01:11:06.213 | 1.00th=[ 119], 5.00th=[ 128], 10.00th=[ 141], 20.00th=[ 157], 01:11:06.213 | 30.00th=[ 169], 40.00th=[ 182], 50.00th=[ 200], 60.00th=[ 285], 01:11:06.213 | 70.00th=[ 343], 80.00th=[ 375], 90.00th=[ 412], 95.00th=[ 445], 01:11:06.213 | 99.00th=[ 537], 99.50th=[ 603], 99.90th=[ 758], 99.95th=[ 898], 01:11:06.213 | 99.99th=[ 898] 01:11:06.213 bw ( KiB/s): min= 4184, max= 4184, per=17.86%, avg=4184.00, stdev= 0.00, samples=1 01:11:06.213 iops : min= 1046, max= 1046, avg=1046.00, stdev= 0.00, samples=1 01:11:06.213 lat (usec) : 250=54.40%, 500=36.62%, 750=8.86%, 1000=0.13% 01:11:06.213 cpu : usr=1.90%, sys=8.40%, ctx=3084, majf=0, minf=15 01:11:06.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:06.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.213 issued rwts: total=1536,1547,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:06.213 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:06.213 job2: (groupid=0, jobs=1): err= 0: pid=93359: Mon Jul 22 11:08:11 2024 01:11:06.213 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 01:11:06.213 slat (usec): min=9, max=100, avg=32.92, stdev=17.87 01:11:06.213 clat (usec): min=262, max=7683, avg=512.41, stdev=344.38 01:11:06.213 lat (usec): min=308, max=7720, avg=545.32, stdev=346.53 01:11:06.213 clat percentiles (usec): 01:11:06.213 | 1.00th=[ 314], 5.00th=[ 351], 10.00th=[ 367], 20.00th=[ 400], 01:11:06.213 | 30.00th=[ 437], 40.00th=[ 465], 50.00th=[ 498], 60.00th=[ 519], 01:11:06.213 | 70.00th=[ 545], 80.00th=[ 578], 90.00th=[ 619], 95.00th=[ 668], 01:11:06.213 | 99.00th=[ 840], 99.50th=[ 906], 99.90th=[ 7439], 99.95th=[ 7701], 01:11:06.213 | 99.99th=[ 7701] 01:11:06.213 write: IOPS=1240, BW=4963KiB/s (5082kB/s)(4968KiB/1001msec); 0 zone resets 01:11:06.213 slat (usec): min=10, max=221, avg=42.35, stdev=19.34 01:11:06.213 clat (usec): min=97, max=638, avg=306.93, stdev=89.95 01:11:06.213 lat (usec): min=132, max=706, avg=349.28, stdev=95.64 01:11:06.213 clat percentiles (usec): 01:11:06.213 | 1.00th=[ 120], 5.00th=[ 145], 10.00th=[ 174], 20.00th=[ 227], 01:11:06.213 | 30.00th=[ 269], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 334], 01:11:06.213 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 445], 01:11:06.213 | 99.00th=[ 498], 99.50th=[ 523], 99.90th=[ 627], 99.95th=[ 635], 01:11:06.213 | 99.99th=[ 635] 01:11:06.213 bw ( KiB/s): min= 4096, max= 4096, per=17.49%, avg=4096.00, stdev= 0.00, samples=1 01:11:06.213 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 01:11:06.213 lat (usec) : 100=0.04%, 250=13.46%, 500=63.86%, 750=21.98%, 1000=0.44% 01:11:06.213 lat (msec) : 2=0.09%, 4=0.04%, 10=0.09% 01:11:06.213 cpu : usr=1.50%, sys=6.60%, ctx=2379, majf=0, minf=12 
01:11:06.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:06.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.214 issued rwts: total=1024,1242,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:06.214 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:06.214 job3: (groupid=0, jobs=1): err= 0: pid=93360: Mon Jul 22 11:08:11 2024 01:11:06.214 read: IOPS=1150, BW=4603KiB/s (4714kB/s)(4608KiB/1001msec) 01:11:06.214 slat (usec): min=7, max=104, avg=20.62, stdev=11.99 01:11:06.214 clat (usec): min=238, max=626, avg=420.91, stdev=60.89 01:11:06.214 lat (usec): min=253, max=667, avg=441.53, stdev=61.87 01:11:06.214 clat percentiles (usec): 01:11:06.214 | 1.00th=[ 310], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 367], 01:11:06.214 | 30.00th=[ 383], 40.00th=[ 400], 50.00th=[ 416], 60.00th=[ 433], 01:11:06.214 | 70.00th=[ 449], 80.00th=[ 474], 90.00th=[ 506], 95.00th=[ 529], 01:11:06.214 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 619], 99.95th=[ 627], 01:11:06.214 | 99.99th=[ 627] 01:11:06.214 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:11:06.214 slat (usec): min=13, max=113, avg=31.02, stdev=11.63 01:11:06.214 clat (usec): min=137, max=517, avg=285.00, stdev=60.90 01:11:06.214 lat (usec): min=165, max=557, avg=316.01, stdev=60.62 01:11:06.214 clat percentiles (usec): 01:11:06.214 | 1.00th=[ 167], 5.00th=[ 192], 10.00th=[ 206], 20.00th=[ 229], 01:11:06.214 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 285], 60.00th=[ 302], 01:11:06.214 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 388], 01:11:06.214 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 506], 99.95th=[ 519], 01:11:06.214 | 99.99th=[ 519] 01:11:06.214 bw ( KiB/s): min= 6312, max= 6312, per=26.95%, avg=6312.00, stdev= 0.00, samples=1 01:11:06.214 iops : min= 1578, max= 1578, avg=1578.00, stdev= 0.00, samples=1 01:11:06.214 lat (usec) : 250=17.04%, 500=77.90%, 750=5.06% 01:11:06.214 cpu : usr=0.70%, sys=5.80%, ctx=2756, majf=0, minf=5 01:11:06.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:06.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:06.214 issued rwts: total=1152,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:06.214 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:06.214 01:11:06.214 Run status group 0 (all jobs): 01:11:06.214 READ: bw=19.0MiB/s (19.9MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=19.0MiB (19.9MB), run=1001-1001msec 01:11:06.214 WRITE: bw=22.9MiB/s (24.0MB/s), 4963KiB/s-6182KiB/s (5082kB/s-6330kB/s), io=22.9MiB (24.0MB), run=1001-1001msec 01:11:06.214 01:11:06.214 Disk stats (read/write): 01:11:06.214 nvme0n1: ios=1073/1269, merge=0/0, ticks=446/383, in_queue=829, util=87.76% 01:11:06.214 nvme0n2: ios=1073/1460, merge=0/0, ticks=434/399, in_queue=833, util=89.77% 01:11:06.214 nvme0n3: ios=887/1024, merge=0/0, ticks=451/340, in_queue=791, util=88.13% 01:11:06.214 nvme0n4: ios=1024/1273, merge=0/0, ticks=424/387, in_queue=811, util=89.62% 01:11:06.214 11:08:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:11:06.214 [global] 01:11:06.214 thread=1 01:11:06.214 invalidate=1 01:11:06.214 rw=write 01:11:06.214 time_based=1 01:11:06.214 runtime=1 01:11:06.214 
ioengine=libaio 01:11:06.214 direct=1 01:11:06.214 bs=4096 01:11:06.214 iodepth=128 01:11:06.214 norandommap=0 01:11:06.214 numjobs=1 01:11:06.214 01:11:06.214 verify_dump=1 01:11:06.214 verify_backlog=512 01:11:06.214 verify_state_save=0 01:11:06.214 do_verify=1 01:11:06.214 verify=crc32c-intel 01:11:06.214 [job0] 01:11:06.214 filename=/dev/nvme0n1 01:11:06.214 [job1] 01:11:06.214 filename=/dev/nvme0n2 01:11:06.214 [job2] 01:11:06.214 filename=/dev/nvme0n3 01:11:06.214 [job3] 01:11:06.214 filename=/dev/nvme0n4 01:11:06.214 Could not set queue depth (nvme0n1) 01:11:06.214 Could not set queue depth (nvme0n2) 01:11:06.214 Could not set queue depth (nvme0n3) 01:11:06.214 Could not set queue depth (nvme0n4) 01:11:06.214 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:06.214 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:06.214 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:06.214 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:06.214 fio-3.35 01:11:06.214 Starting 4 threads 01:11:07.588 01:11:07.588 job0: (groupid=0, jobs=1): err= 0: pid=93414: Mon Jul 22 11:08:12 2024 01:11:07.588 read: IOPS=3728, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1005msec) 01:11:07.588 slat (usec): min=3, max=6352, avg=134.28, stdev=602.88 01:11:07.588 clat (usec): min=837, max=28570, avg=16996.71, stdev=4839.00 01:11:07.588 lat (usec): min=5139, max=28587, avg=17130.99, stdev=4862.32 01:11:07.588 clat percentiles (usec): 01:11:07.588 | 1.00th=[ 9110], 5.00th=[11076], 10.00th=[11600], 20.00th=[11994], 01:11:07.588 | 30.00th=[12518], 40.00th=[14877], 50.00th=[17433], 60.00th=[18744], 01:11:07.588 | 70.00th=[20317], 80.00th=[21627], 90.00th=[23725], 95.00th=[24249], 01:11:07.588 | 99.00th=[27132], 99.50th=[27919], 99.90th=[28181], 99.95th=[28443], 01:11:07.588 | 99.99th=[28443] 01:11:07.588 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 01:11:07.588 slat (usec): min=4, max=5118, avg=114.69, stdev=475.45 01:11:07.588 clat (usec): min=8207, max=26891, avg=15401.33, stdev=4036.67 01:11:07.588 lat (usec): min=8223, max=27138, avg=15516.03, stdev=4044.57 01:11:07.588 clat percentiles (usec): 01:11:07.588 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[11731], 01:11:07.588 | 30.00th=[12125], 40.00th=[12911], 50.00th=[15401], 60.00th=[16909], 01:11:07.588 | 70.00th=[17957], 80.00th=[19268], 90.00th=[20317], 95.00th=[21627], 01:11:07.588 | 99.00th=[25297], 99.50th=[25822], 99.90th=[26870], 99.95th=[26870], 01:11:07.588 | 99.99th=[26870] 01:11:07.588 bw ( KiB/s): min=13400, max=19368, per=28.93%, avg=16384.00, stdev=4220.01, samples=2 01:11:07.588 iops : min= 3350, max= 4842, avg=4096.00, stdev=1055.00, samples=2 01:11:07.588 lat (usec) : 1000=0.01% 01:11:07.588 lat (msec) : 10=5.87%, 20=71.73%, 50=22.39% 01:11:07.588 cpu : usr=3.59%, sys=11.35%, ctx=874, majf=0, minf=17 01:11:07.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:11:07.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:07.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:07.588 issued rwts: total=3747,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:07.588 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:07.588 job1: (groupid=0, jobs=1): err= 0: pid=93415: Mon Jul 22 
11:08:12 2024 01:11:07.588 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 01:11:07.588 slat (usec): min=3, max=5259, avg=160.56, stdev=594.69 01:11:07.588 clat (usec): min=6221, max=27165, avg=20176.54, stdev=2530.47 01:11:07.588 lat (usec): min=6233, max=27183, avg=20337.10, stdev=2511.28 01:11:07.588 clat percentiles (usec): 01:11:07.588 | 1.00th=[11469], 5.00th=[16581], 10.00th=[17433], 20.00th=[18482], 01:11:07.588 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20055], 60.00th=[20841], 01:11:07.588 | 70.00th=[21365], 80.00th=[21890], 90.00th=[23725], 95.00th=[24249], 01:11:07.588 | 99.00th=[25560], 99.50th=[26608], 99.90th=[27132], 99.95th=[27132], 01:11:07.588 | 99.99th=[27132] 01:11:07.588 write: IOPS=3084, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1004msec); 0 zone resets 01:11:07.588 slat (usec): min=5, max=5862, avg=155.65, stdev=596.66 01:11:07.588 clat (usec): min=3030, max=27487, avg=20674.21, stdev=2561.40 01:11:07.588 lat (usec): min=3903, max=27505, avg=20829.86, stdev=2521.71 01:11:07.588 clat percentiles (usec): 01:11:07.588 | 1.00th=[14091], 5.00th=[16712], 10.00th=[17957], 20.00th=[19530], 01:11:07.588 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579], 01:11:07.588 | 70.00th=[21890], 80.00th=[22676], 90.00th=[23725], 95.00th=[24511], 01:11:07.588 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26346], 99.95th=[27132], 01:11:07.588 | 99.99th=[27395] 01:11:07.588 bw ( KiB/s): min=12263, max=12288, per=21.67%, avg=12275.50, stdev=17.68, samples=2 01:11:07.588 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 01:11:07.588 lat (msec) : 4=0.08%, 10=0.57%, 20=37.07%, 50=62.28% 01:11:07.588 cpu : usr=3.19%, sys=9.77%, ctx=1067, majf=0, minf=9 01:11:07.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:11:07.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:07.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:07.588 issued rwts: total=3072,3097,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:07.588 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:07.588 job2: (groupid=0, jobs=1): err= 0: pid=93416: Mon Jul 22 11:08:12 2024 01:11:07.588 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 01:11:07.588 slat (usec): min=3, max=6262, avg=132.90, stdev=548.11 01:11:07.588 clat (usec): min=9624, max=28166, avg=17806.85, stdev=4466.21 01:11:07.588 lat (usec): min=10577, max=28178, avg=17939.76, stdev=4483.41 01:11:07.588 clat percentiles (usec): 01:11:07.588 | 1.00th=[10814], 5.00th=[12387], 10.00th=[12518], 20.00th=[13042], 01:11:07.588 | 30.00th=[13304], 40.00th=[15008], 50.00th=[18744], 60.00th=[19792], 01:11:07.588 | 70.00th=[20841], 80.00th=[21890], 90.00th=[23987], 95.00th=[24773], 01:11:07.588 | 99.00th=[26870], 99.50th=[27132], 99.90th=[28181], 99.95th=[28181], 01:11:07.588 | 99.99th=[28181] 01:11:07.588 write: IOPS=3958, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1002msec); 0 zone resets 01:11:07.588 slat (usec): min=6, max=5399, avg=124.64, stdev=487.90 01:11:07.588 clat (usec): min=1938, max=22468, avg=15819.92, stdev=3402.46 01:11:07.588 lat (usec): min=2000, max=22493, avg=15944.55, stdev=3412.65 01:11:07.588 clat percentiles (usec): 01:11:07.588 | 1.00th=[ 7373], 5.00th=[10814], 10.00th=[11338], 20.00th=[12125], 01:11:07.588 | 30.00th=[13566], 40.00th=[14615], 50.00th=[16188], 60.00th=[17433], 01:11:07.588 | 70.00th=[18482], 80.00th=[19268], 90.00th=[20055], 95.00th=[20317], 01:11:07.588 | 99.00th=[21627], 99.50th=[21627], 99.90th=[22414], 
99.95th=[22414], 01:11:07.588 | 99.99th=[22414] 01:11:07.589 bw ( KiB/s): min=13320, max=17384, per=27.10%, avg=15352.00, stdev=2873.68, samples=2 01:11:07.589 iops : min= 3330, max= 4348, avg=3839.00, stdev=719.83, samples=2 01:11:07.589 lat (msec) : 2=0.01%, 4=0.32%, 10=0.34%, 20=76.17%, 50=23.15% 01:11:07.589 cpu : usr=3.60%, sys=11.89%, ctx=931, majf=0, minf=13 01:11:07.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:11:07.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:07.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:07.589 issued rwts: total=3584,3966,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:07.589 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:07.589 job3: (groupid=0, jobs=1): err= 0: pid=93417: Mon Jul 22 11:08:12 2024 01:11:07.589 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1003msec) 01:11:07.589 slat (usec): min=3, max=5810, avg=162.26, stdev=597.11 01:11:07.589 clat (usec): min=191, max=28089, avg=20430.28, stdev=2980.75 01:11:07.589 lat (usec): min=3220, max=29165, avg=20592.54, stdev=2967.60 01:11:07.589 clat percentiles (usec): 01:11:07.589 | 1.00th=[ 5080], 5.00th=[16909], 10.00th=[17957], 20.00th=[19268], 01:11:07.589 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20579], 60.00th=[21103], 01:11:07.589 | 70.00th=[21365], 80.00th=[22152], 90.00th=[23725], 95.00th=[24511], 01:11:07.589 | 99.00th=[25560], 99.50th=[26870], 99.90th=[27132], 99.95th=[28181], 01:11:07.589 | 99.99th=[28181] 01:11:07.589 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 01:11:07.589 slat (usec): min=11, max=5352, avg=155.97, stdev=571.33 01:11:07.589 clat (usec): min=13963, max=26436, avg=20668.21, stdev=1793.16 01:11:07.589 lat (usec): min=13986, max=26485, avg=20824.18, stdev=1727.86 01:11:07.589 clat percentiles (usec): 01:11:07.589 | 1.00th=[15926], 5.00th=[17957], 10.00th=[18744], 20.00th=[19530], 01:11:07.589 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579], 01:11:07.589 | 70.00th=[21627], 80.00th=[22152], 90.00th=[23200], 95.00th=[23725], 01:11:07.589 | 99.00th=[24773], 99.50th=[25035], 99.90th=[26084], 99.95th=[26346], 01:11:07.589 | 99.99th=[26346] 01:11:07.589 bw ( KiB/s): min=12288, max=12312, per=21.72%, avg=12300.00, stdev=16.97, samples=2 01:11:07.589 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 01:11:07.589 lat (usec) : 250=0.02% 01:11:07.589 lat (msec) : 4=0.18%, 10=0.86%, 20=31.92%, 50=67.03% 01:11:07.589 cpu : usr=2.40%, sys=10.78%, ctx=1077, majf=0, minf=11 01:11:07.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:11:07.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:07.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:07.589 issued rwts: total=3063,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:07.589 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:07.589 01:11:07.589 Run status group 0 (all jobs): 01:11:07.589 READ: bw=52.3MiB/s (54.9MB/s), 11.9MiB/s-14.6MiB/s (12.5MB/s-15.3MB/s), io=52.6MiB (55.2MB), run=1002-1005msec 01:11:07.589 WRITE: bw=55.3MiB/s (58.0MB/s), 12.0MiB/s-15.9MiB/s (12.5MB/s-16.7MB/s), io=55.6MiB (58.3MB), run=1002-1005msec 01:11:07.589 01:11:07.589 Disk stats (read/write): 01:11:07.589 nvme0n1: ios=3331/3584, merge=0/0, ticks=14216/12392, in_queue=26608, util=87.54% 01:11:07.589 nvme0n2: ios=2587/2658, merge=0/0, ticks=12553/12009, in_queue=24562, util=88.27% 
01:11:07.589 nvme0n3: ios=3072/3561, merge=0/0, ticks=12092/12034, in_queue=24126, util=89.21% 01:11:07.589 nvme0n4: ios=2560/2620, merge=0/0, ticks=12558/12144, in_queue=24702, util=89.27% 01:11:07.589 11:08:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:11:07.589 [global] 01:11:07.589 thread=1 01:11:07.589 invalidate=1 01:11:07.589 rw=randwrite 01:11:07.589 time_based=1 01:11:07.589 runtime=1 01:11:07.589 ioengine=libaio 01:11:07.589 direct=1 01:11:07.589 bs=4096 01:11:07.589 iodepth=128 01:11:07.589 norandommap=0 01:11:07.589 numjobs=1 01:11:07.589 01:11:07.589 verify_dump=1 01:11:07.589 verify_backlog=512 01:11:07.589 verify_state_save=0 01:11:07.589 do_verify=1 01:11:07.589 verify=crc32c-intel 01:11:07.589 [job0] 01:11:07.589 filename=/dev/nvme0n1 01:11:07.589 [job1] 01:11:07.589 filename=/dev/nvme0n2 01:11:07.589 [job2] 01:11:07.589 filename=/dev/nvme0n3 01:11:07.589 [job3] 01:11:07.589 filename=/dev/nvme0n4 01:11:07.589 Could not set queue depth (nvme0n1) 01:11:07.589 Could not set queue depth (nvme0n2) 01:11:07.589 Could not set queue depth (nvme0n3) 01:11:07.589 Could not set queue depth (nvme0n4) 01:11:07.589 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:07.589 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:07.589 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:07.589 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:07.589 fio-3.35 01:11:07.589 Starting 4 threads 01:11:08.960 01:11:08.960 job0: (groupid=0, jobs=1): err= 0: pid=93471: Mon Jul 22 11:08:13 2024 01:11:08.960 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 01:11:08.960 slat (usec): min=4, max=11663, avg=219.68, stdev=997.68 01:11:08.960 clat (usec): min=16321, max=56294, avg=25717.42, stdev=6430.75 01:11:08.960 lat (usec): min=17158, max=56328, avg=25937.09, stdev=6559.24 01:11:08.960 clat percentiles (usec): 01:11:08.960 | 1.00th=[18220], 5.00th=[19792], 10.00th=[20317], 20.00th=[21365], 01:11:08.960 | 30.00th=[21627], 40.00th=[21627], 50.00th=[22414], 60.00th=[25297], 01:11:08.960 | 70.00th=[28443], 80.00th=[30540], 90.00th=[34341], 95.00th=[35914], 01:11:08.960 | 99.00th=[50594], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 01:11:08.960 | 99.99th=[56361] 01:11:08.960 write: IOPS=1871, BW=7484KiB/s (7664kB/s)(7544KiB/1008msec); 0 zone resets 01:11:08.960 slat (usec): min=5, max=19208, avg=343.58, stdev=1289.09 01:11:08.960 clat (usec): min=7690, max=98985, avg=44897.12, stdev=16358.68 01:11:08.960 lat (msec): min=22, max=102, avg=45.24, stdev=16.42 01:11:08.960 clat percentiles (usec): 01:11:08.960 | 1.00th=[22938], 5.00th=[28443], 10.00th=[28967], 20.00th=[31851], 01:11:08.960 | 30.00th=[35390], 40.00th=[38011], 50.00th=[39060], 60.00th=[40109], 01:11:08.960 | 70.00th=[46924], 80.00th=[61080], 90.00th=[70779], 95.00th=[71828], 01:11:08.960 | 99.00th=[94897], 99.50th=[95945], 99.90th=[98042], 99.95th=[99091], 01:11:08.960 | 99.99th=[99091] 01:11:08.960 bw ( KiB/s): min= 5868, max= 8192, per=13.65%, avg=7030.00, stdev=1643.32, samples=2 01:11:08.960 iops : min= 1467, max= 2048, avg=1757.50, stdev=410.83, samples=2 01:11:08.960 lat (msec) : 10=0.03%, 20=4.00%, 50=80.19%, 100=15.78% 01:11:08.961 cpu : usr=1.49%, sys=5.86%, ctx=606, 
majf=0, minf=19 01:11:08.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 01:11:08.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:08.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:08.961 issued rwts: total=1536,1886,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:08.961 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:08.961 job1: (groupid=0, jobs=1): err= 0: pid=93472: Mon Jul 22 11:08:13 2024 01:11:08.961 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 01:11:08.961 slat (usec): min=6, max=17983, avg=144.18, stdev=949.18 01:11:08.961 clat (usec): min=5230, max=72281, avg=17212.25, stdev=10523.16 01:11:08.961 lat (usec): min=5243, max=72298, avg=17356.43, stdev=10602.10 01:11:08.961 clat percentiles (usec): 01:11:08.961 | 1.00th=[ 7177], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10159], 01:11:08.961 | 30.00th=[10552], 40.00th=[11207], 50.00th=[13042], 60.00th=[16909], 01:11:08.961 | 70.00th=[19530], 80.00th=[23200], 90.00th=[28705], 95.00th=[30016], 01:11:08.961 | 99.00th=[64750], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 01:11:08.961 | 99.99th=[71828] 01:11:08.961 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1004msec); 0 zone resets 01:11:08.961 slat (usec): min=6, max=15954, avg=153.97, stdev=724.76 01:11:08.961 clat (usec): min=1935, max=79400, avg=21629.14, stdev=14186.54 01:11:08.961 lat (usec): min=4734, max=79421, avg=21783.10, stdev=14267.91 01:11:08.961 clat percentiles (usec): 01:11:08.961 | 1.00th=[ 5538], 5.00th=[ 7767], 10.00th=[ 8356], 20.00th=[11207], 01:11:08.961 | 30.00th=[17957], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 01:11:08.961 | 70.00th=[20055], 80.00th=[20841], 90.00th=[39584], 95.00th=[58459], 01:11:08.961 | 99.00th=[74974], 99.50th=[77071], 99.90th=[79168], 99.95th=[79168], 01:11:08.961 | 99.99th=[79168] 01:11:08.961 bw ( KiB/s): min=12553, max=13944, per=25.72%, avg=13248.50, stdev=983.59, samples=2 01:11:08.961 iops : min= 3138, max= 3486, avg=3312.00, stdev=246.07, samples=2 01:11:08.961 lat (msec) : 2=0.02%, 10=16.73%, 20=51.27%, 50=26.76%, 100=5.22% 01:11:08.961 cpu : usr=3.49%, sys=10.07%, ctx=475, majf=0, minf=9 01:11:08.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 01:11:08.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:08.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:08.961 issued rwts: total=3072,3437,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:08.961 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:08.961 job2: (groupid=0, jobs=1): err= 0: pid=93473: Mon Jul 22 11:08:13 2024 01:11:08.961 read: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec) 01:11:08.961 slat (usec): min=4, max=19568, avg=240.62, stdev=1219.93 01:11:08.961 clat (usec): min=15979, max=70759, avg=28846.29, stdev=11343.96 01:11:08.961 lat (usec): min=15991, max=70797, avg=29086.91, stdev=11435.45 01:11:08.961 clat percentiles (usec): 01:11:08.961 | 1.00th=[16057], 5.00th=[19530], 10.00th=[20579], 20.00th=[21365], 01:11:08.961 | 30.00th=[21627], 40.00th=[21890], 50.00th=[23462], 60.00th=[28705], 01:11:08.961 | 70.00th=[31327], 80.00th=[34866], 90.00th=[45876], 95.00th=[53216], 01:11:08.961 | 99.00th=[68682], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 01:11:08.961 | 99.99th=[70779] 01:11:08.961 write: IOPS=1921, BW=7688KiB/s (7872kB/s)(7780KiB/1012msec); 0 zone resets 01:11:08.961 slat (usec): min=6, max=21725, avg=317.43, 
stdev=1390.82 01:11:08.961 clat (msec): min=8, max=101, avg=43.14, stdev=17.56 01:11:08.961 lat (msec): min=15, max=101, avg=43.46, stdev=17.64 01:11:08.961 clat percentiles (msec): 01:11:08.961 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 29], 01:11:08.961 | 30.00th=[ 34], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 40], 01:11:08.961 | 70.00th=[ 43], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 79], 01:11:08.961 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 102], 99.95th=[ 102], 01:11:08.961 | 99.99th=[ 102] 01:11:08.961 bw ( KiB/s): min= 6344, max= 8208, per=14.12%, avg=7276.00, stdev=1318.05, samples=2 01:11:08.961 iops : min= 1586, max= 2052, avg=1819.00, stdev=329.51, samples=2 01:11:08.961 lat (msec) : 10=0.03%, 20=4.88%, 50=79.46%, 100=15.28%, 250=0.34% 01:11:08.961 cpu : usr=2.37%, sys=5.04%, ctx=588, majf=0, minf=9 01:11:08.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 01:11:08.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:08.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:08.961 issued rwts: total=1536,1945,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:08.961 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:08.961 job3: (groupid=0, jobs=1): err= 0: pid=93474: Mon Jul 22 11:08:13 2024 01:11:08.961 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 01:11:08.961 slat (usec): min=5, max=5181, avg=86.10, stdev=398.71 01:11:08.961 clat (usec): min=6684, max=16062, avg=11297.33, stdev=1377.97 01:11:08.961 lat (usec): min=6718, max=16080, avg=11383.44, stdev=1390.59 01:11:08.961 clat percentiles (usec): 01:11:08.961 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10290], 01:11:08.961 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 01:11:08.961 | 70.00th=[11863], 80.00th=[12256], 90.00th=[13042], 95.00th=[13698], 01:11:08.961 | 99.00th=[15139], 99.50th=[15401], 99.90th=[16057], 99.95th=[16057], 01:11:08.961 | 99.99th=[16057] 01:11:08.961 write: IOPS=5753, BW=22.5MiB/s (23.6MB/s)(22.5MiB/1002msec); 0 zone resets 01:11:08.961 slat (usec): min=11, max=5092, avg=81.76, stdev=357.73 01:11:08.961 clat (usec): min=1292, max=16070, avg=10935.43, stdev=1466.47 01:11:08.961 lat (usec): min=1312, max=16706, avg=11017.19, stdev=1459.36 01:11:08.961 clat percentiles (usec): 01:11:08.961 | 1.00th=[ 6390], 5.00th=[ 7767], 10.00th=[ 9503], 20.00th=[10552], 01:11:08.961 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 01:11:08.961 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12780], 01:11:08.961 | 99.00th=[15401], 99.50th=[15664], 99.90th=[16057], 99.95th=[16057], 01:11:08.961 | 99.99th=[16057] 01:11:08.961 bw ( KiB/s): min=21168, max=24064, per=43.90%, avg=22616.00, stdev=2047.78, samples=2 01:11:08.961 iops : min= 5292, max= 6016, avg=5654.00, stdev=511.95, samples=2 01:11:08.961 lat (msec) : 2=0.14%, 10=13.49%, 20=86.36% 01:11:08.961 cpu : usr=4.60%, sys=17.38%, ctx=699, majf=0, minf=9 01:11:08.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 01:11:08.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:08.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:08.961 issued rwts: total=5632,5765,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:08.961 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:08.961 01:11:08.961 Run status group 0 (all jobs): 01:11:08.961 READ: bw=45.5MiB/s (47.7MB/s), 6071KiB/s-22.0MiB/s 
(6217kB/s-23.0MB/s), io=46.0MiB (48.2MB), run=1002-1012msec 01:11:08.961 WRITE: bw=50.3MiB/s (52.8MB/s), 7484KiB/s-22.5MiB/s (7664kB/s-23.6MB/s), io=50.9MiB (53.4MB), run=1002-1012msec 01:11:08.961 01:11:08.961 Disk stats (read/write): 01:11:08.961 nvme0n1: ios=1553/1536, merge=0/0, ticks=18563/31341, in_queue=49904, util=85.56% 01:11:08.961 nvme0n2: ios=2588/2568, merge=0/0, ticks=43746/60361, in_queue=104107, util=87.79% 01:11:08.961 nvme0n3: ios=1536/1559, merge=0/0, ticks=22029/30588, in_queue=52617, util=88.66% 01:11:08.961 nvme0n4: ios=4608/5111, merge=0/0, ticks=24467/23894, in_queue=48361, util=89.74% 01:11:08.961 11:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:11:08.961 11:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=93487 01:11:08.961 11:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:11:08.961 11:08:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:11:08.961 [global] 01:11:08.961 thread=1 01:11:08.961 invalidate=1 01:11:08.961 rw=read 01:11:08.961 time_based=1 01:11:08.961 runtime=10 01:11:08.961 ioengine=libaio 01:11:08.961 direct=1 01:11:08.961 bs=4096 01:11:08.961 iodepth=1 01:11:08.961 norandommap=1 01:11:08.961 numjobs=1 01:11:08.961 01:11:08.961 [job0] 01:11:08.961 filename=/dev/nvme0n1 01:11:08.961 [job1] 01:11:08.961 filename=/dev/nvme0n2 01:11:08.961 [job2] 01:11:08.961 filename=/dev/nvme0n3 01:11:08.961 [job3] 01:11:08.961 filename=/dev/nvme0n4 01:11:08.961 Could not set queue depth (nvme0n1) 01:11:08.961 Could not set queue depth (nvme0n2) 01:11:08.961 Could not set queue depth (nvme0n3) 01:11:08.961 Could not set queue depth (nvme0n4) 01:11:09.218 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:09.218 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:09.218 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:09.218 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:11:09.218 fio-3.35 01:11:09.218 Starting 4 threads 01:11:12.501 11:08:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 01:11:12.501 fio: pid=93540, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 01:11:12.501 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=47034368, buflen=4096 01:11:12.501 11:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 01:11:12.501 fio: pid=93539, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 01:11:12.501 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=59502592, buflen=4096 01:11:12.501 11:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:11:12.501 11:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:11:12.760 fio: pid=93537, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 01:11:12.760 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4321280, buflen=4096 01:11:12.760 11:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
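Context for the step above: the hotplug phase starts a 10-second fio read job (iodepth=1, one job per namespace) and then tears the backing bdevs out from under it over JSON-RPC, so the "Remote I/O error" lines fio prints here are the expected outcome rather than a test failure. A minimal sketch of that removal sequence, using the bdev names this test created earlier (concat0, raid0 and the Malloc* bdevs) and assuming the default RPC socket:

  # Remove the RAID/concat bdevs first, then each plain malloc bdev; the running
  # fio job is expected to start reporting Remote I/O errors as namespaces vanish.
  ./scripts/rpc.py bdev_raid_delete concat0
  ./scripts/rpc.py bdev_raid_delete raid0
  for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      ./scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
  done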
01:11:12.760 11:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:11:13.020 fio: pid=93538, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 01:11:13.020 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=66764800, buflen=4096 01:11:13.020 01:11:13.020 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93537: Mon Jul 22 11:08:18 2024 01:11:13.020 read: IOPS=4967, BW=19.4MiB/s (20.3MB/s)(68.1MiB/3511msec) 01:11:13.020 slat (usec): min=12, max=11686, avg=20.34, stdev=145.57 01:11:13.020 clat (usec): min=119, max=2684, avg=179.37, stdev=44.12 01:11:13.020 lat (usec): min=134, max=11910, avg=199.70, stdev=153.32 01:11:13.020 clat percentiles (usec): 01:11:13.020 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 01:11:13.020 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 180], 01:11:13.020 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 239], 01:11:13.020 | 99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 420], 99.95th=[ 594], 01:11:13.020 | 99.99th=[ 2278] 01:11:13.020 bw ( KiB/s): min=19944, max=21240, per=33.02%, avg=20680.00, stdev=417.11, samples=6 01:11:13.020 iops : min= 4986, max= 5310, avg=5170.00, stdev=104.28, samples=6 01:11:13.020 lat (usec) : 250=96.14%, 500=3.80%, 750=0.03%, 1000=0.01% 01:11:13.020 lat (msec) : 2=0.01%, 4=0.01% 01:11:13.020 cpu : usr=1.62%, sys=7.35%, ctx=17446, majf=0, minf=1 01:11:13.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:13.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:13.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:13.020 issued rwts: total=17440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:13.020 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:13.020 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93538: Mon Jul 22 11:08:18 2024 01:11:13.020 read: IOPS=4271, BW=16.7MiB/s (17.5MB/s)(63.7MiB/3816msec) 01:11:13.020 slat (usec): min=12, max=15313, avg=25.02, stdev=215.16 01:11:13.020 clat (usec): min=129, max=31634, avg=207.15, stdev=254.08 01:11:13.020 lat (usec): min=154, max=31669, avg=232.18, stdev=336.34 01:11:13.020 clat percentiles (usec): 01:11:13.020 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 01:11:13.020 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 204], 01:11:13.020 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 255], 95.00th=[ 302], 01:11:13.020 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 586], 99.95th=[ 1037], 01:11:13.020 | 99.99th=[ 3752] 01:11:13.020 bw ( KiB/s): min=11267, max=18664, per=27.67%, avg=17331.86, stdev=2713.41, samples=7 01:11:13.020 iops : min= 2816, max= 4666, avg=4332.86, stdev=678.63, samples=7 01:11:13.020 lat (usec) : 250=88.80%, 500=11.07%, 750=0.04%, 1000=0.02% 01:11:13.020 lat (msec) : 2=0.04%, 4=0.02%, 50=0.01% 01:11:13.020 cpu : usr=1.57%, sys=7.10%, ctx=16315, majf=0, minf=1 01:11:13.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:13.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:13.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:13.020 issued rwts: total=16301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:13.020 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:13.020 job2: (groupid=0, jobs=1): 
err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93539: Mon Jul 22 11:08:18 2024 01:11:13.020 read: IOPS=4482, BW=17.5MiB/s (18.4MB/s)(56.7MiB/3241msec) 01:11:13.020 slat (usec): min=14, max=7808, avg=22.23, stdev=89.57 01:11:13.020 clat (usec): min=136, max=2342, avg=198.74, stdev=41.83 01:11:13.020 lat (usec): min=156, max=8071, avg=220.97, stdev=99.00 01:11:13.020 clat percentiles (usec): 01:11:13.020 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 01:11:13.020 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 01:11:13.020 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 251], 01:11:13.020 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 420], 99.95th=[ 873], 01:11:13.020 | 99.99th=[ 1352] 01:11:13.020 bw ( KiB/s): min=17952, max=18784, per=29.19%, avg=18282.67, stdev=275.82, samples=6 01:11:13.020 iops : min= 4488, max= 4696, avg=4570.67, stdev=68.95, samples=6 01:11:13.020 lat (usec) : 250=94.89%, 500=5.02%, 750=0.01%, 1000=0.04% 01:11:13.020 lat (msec) : 2=0.02%, 4=0.01% 01:11:13.020 cpu : usr=1.64%, sys=7.22%, ctx=14532, majf=0, minf=1 01:11:13.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:13.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:13.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:13.020 issued rwts: total=14528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:13.020 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:13.020 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93540: Mon Jul 22 11:08:18 2024 01:11:13.020 read: IOPS=3898, BW=15.2MiB/s (16.0MB/s)(44.9MiB/2946msec) 01:11:13.020 slat (usec): min=15, max=130, avg=22.43, stdev= 9.15 01:11:13.020 clat (usec): min=122, max=2544, avg=231.92, stdev=48.82 01:11:13.020 lat (usec): min=187, max=2565, avg=254.35, stdev=48.97 01:11:13.020 clat percentiles (usec): 01:11:13.020 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 01:11:13.020 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 01:11:13.020 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281], 01:11:13.020 | 99.00th=[ 310], 99.50th=[ 355], 99.90th=[ 635], 99.95th=[ 816], 01:11:13.020 | 99.99th=[ 2278] 01:11:13.020 bw ( KiB/s): min=15336, max=15840, per=24.90%, avg=15596.80, stdev=212.01, samples=5 01:11:13.020 iops : min= 3834, max= 3960, avg=3899.20, stdev=53.00, samples=5 01:11:13.020 lat (usec) : 250=76.83%, 500=22.95%, 750=0.15%, 1000=0.02% 01:11:13.020 lat (msec) : 2=0.03%, 4=0.02% 01:11:13.020 cpu : usr=1.36%, sys=6.86%, ctx=11487, majf=0, minf=1 01:11:13.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:11:13.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:13.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:13.020 issued rwts: total=11484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:13.020 latency : target=0, window=0, percentile=100.00%, depth=1 01:11:13.020 01:11:13.020 Run status group 0 (all jobs): 01:11:13.020 READ: bw=61.2MiB/s (64.1MB/s), 15.2MiB/s-19.4MiB/s (16.0MB/s-20.3MB/s), io=233MiB (245MB), run=2946-3816msec 01:11:13.020 01:11:13.020 Disk stats (read/write): 01:11:13.020 nvme0n1: ios=16839/0, merge=0/0, ticks=3103/0, in_queue=3103, util=95.22% 01:11:13.021 nvme0n2: ios=15481/0, merge=0/0, ticks=3335/0, in_queue=3335, util=95.21% 01:11:13.021 nvme0n3: ios=14092/0, merge=0/0, ticks=2868/0, 
in_queue=2868, util=96.37% 01:11:13.021 nvme0n4: ios=11180/0, merge=0/0, ticks=2688/0, in_queue=2688, util=96.69% 01:11:13.021 11:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:11:13.021 11:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:11:13.280 11:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:11:13.280 11:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:11:13.856 11:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:11:13.856 11:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:11:14.122 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:11:14.122 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:11:14.381 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:11:14.381 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 93487 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:11:14.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:11:14.640 nvmf hotplug test: fio failed as expected 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:11:14.640 11:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 01:11:14.899 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:11:14.899 rmmod nvme_tcp 01:11:15.157 rmmod nvme_fabrics 01:11:15.157 rmmod nvme_keyring 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 92992 ']' 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 92992 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 92992 ']' 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 92992 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92992 01:11:15.157 killing process with pid 92992 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92992' 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 92992 01:11:15.157 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 92992 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:11:15.417 01:11:15.417 real 0m20.909s 01:11:15.417 user 1m17.555s 01:11:15.417 sys 0m11.818s 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 01:11:15.417 ************************************ 01:11:15.417 11:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # 
set +x 01:11:15.417 END TEST nvmf_fio_target 01:11:15.417 ************************************ 01:11:15.417 11:08:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:11:15.417 11:08:20 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:11:15.417 11:08:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:11:15.417 11:08:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:11:15.417 11:08:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:11:15.417 ************************************ 01:11:15.417 START TEST nvmf_bdevio 01:11:15.417 ************************************ 01:11:15.417 11:08:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:11:15.677 * Looking for test storage... 01:11:15.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
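The nvmf/common.sh lines above pin down the identifiers the rest of this test relies on: port 4420, serial SPDKISFASTANDAWESOME, and a host NQN/ID pair freshly generated with nvme gen-hostnqn. A minimal sketch of how such a pair is derived and later consumed on the initiator side (the connect line is illustrative only; the actual target address and subsystem come from the test itself):

  # Generate nqn.2014-08.org.nvmexpress:uuid:<uuid> and reuse its UUID suffix as the
  # host ID, mirroring NVME_HOSTNQN/NVME_HOSTID above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}
  # An initiator would then identify itself with these values, e.g.:
  #   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
  #       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"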
01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:11:15.677 Cannot find device "nvmf_tgt_br" 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:11:15.677 Cannot find device "nvmf_tgt_br2" 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:11:15.677 Cannot find device "nvmf_tgt_br" 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:11:15.677 Cannot find device "nvmf_tgt_br2" 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@159 -- # true 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:15.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:15.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:11:15.677 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:11:15.936 11:08:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:11:15.936 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:11:15.936 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:11:15.936 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 
1 10.0.0.2 01:11:15.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:11:15.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 01:11:15.936 01:11:15.936 --- 10.0.0.2 ping statistics --- 01:11:15.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:15.936 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:11:15.936 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:11:15.936 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:11:15.936 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 01:11:15.937 01:11:15.937 --- 10.0.0.3 ping statistics --- 01:11:15.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:15.937 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:11:15.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:15.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 01:11:15.937 01:11:15.937 --- 10.0.0.1 ping statistics --- 01:11:15.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:15.937 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=93858 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 93858 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 93858 ']' 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:15.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
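The interface setup and ping checks above amount to a small veth-plus-bridge topology: the target's interfaces live inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the initiator side stays in the host namespace (10.0.0.1), and everything is tied together through the nvmf_br bridge. A condensed sketch of the same construction, leaving out the second target interface and the link-up/iptables steps that the full trace above also performs:

  # One veth pair per side; the target end is moved into its own network namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # Bridge the host-side peers so 10.0.0.1 and 10.0.0.2 can reach each other
  # (all interfaces still need to be brought up, as done in the trace above).
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.2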
01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:15.937 11:08:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:15.937 [2024-07-22 11:08:21.122019] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:11:15.937 [2024-07-22 11:08:21.122109] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:16.194 [2024-07-22 11:08:21.270317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:11:16.194 [2024-07-22 11:08:21.363199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:16.194 [2024-07-22 11:08:21.363258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:16.194 [2024-07-22 11:08:21.363272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:16.194 [2024-07-22 11:08:21.363292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:16.194 [2024-07-22 11:08:21.363302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:16.194 [2024-07-22 11:08:21.363477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:11:16.194 [2024-07-22 11:08:21.363658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 01:11:16.194 [2024-07-22 11:08:21.363757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 01:11:16.194 [2024-07-22 11:08:21.363765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:17.129 [2024-07-22 11:08:22.195193] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:17.129 Malloc0 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:17.129 [2024-07-22 11:08:22.273363] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:11:17.129 { 01:11:17.129 "params": { 01:11:17.129 "name": "Nvme$subsystem", 01:11:17.129 "trtype": "$TEST_TRANSPORT", 01:11:17.129 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:17.129 "adrfam": "ipv4", 01:11:17.129 "trsvcid": "$NVMF_PORT", 01:11:17.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:17.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:17.129 "hdgst": ${hdgst:-false}, 01:11:17.129 "ddgst": ${ddgst:-false} 01:11:17.129 }, 01:11:17.129 "method": "bdev_nvme_attach_controller" 01:11:17.129 } 01:11:17.129 EOF 01:11:17.129 )") 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 01:11:17.129 11:08:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:11:17.129 "params": { 01:11:17.129 "name": "Nvme1", 01:11:17.129 "trtype": "tcp", 01:11:17.129 "traddr": "10.0.0.2", 01:11:17.129 "adrfam": "ipv4", 01:11:17.129 "trsvcid": "4420", 01:11:17.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:17.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:17.129 "hdgst": false, 01:11:17.129 "ddgst": false 01:11:17.129 }, 01:11:17.129 "method": "bdev_nvme_attach_controller" 01:11:17.129 }' 01:11:17.129 [2024-07-22 11:08:22.333560] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
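To summarize the provisioning the rpc_cmd calls above performed before launching bdevio: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, expose that bdev as a namespace of nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420; the JSON printed by gen_nvmf_target_json is then fed to the bdevio app so it attaches the subsystem as Nvme1. Expressed as direct rpc.py calls (a sketch; rpc_cmd is assumed to forward these arguments to scripts/rpc.py unchanged):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420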
01:11:17.129 [2024-07-22 11:08:22.333671] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93919 ] 01:11:17.387 [2024-07-22 11:08:22.474528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:11:17.387 [2024-07-22 11:08:22.575422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:11:17.387 [2024-07-22 11:08:22.575580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:11:17.387 [2024-07-22 11:08:22.575591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:11:17.647 I/O targets: 01:11:17.647 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:11:17.647 01:11:17.647 01:11:17.647 CUnit - A unit testing framework for C - Version 2.1-3 01:11:17.647 http://cunit.sourceforge.net/ 01:11:17.647 01:11:17.647 01:11:17.647 Suite: bdevio tests on: Nvme1n1 01:11:17.647 Test: blockdev write read block ...passed 01:11:17.905 Test: blockdev write zeroes read block ...passed 01:11:17.905 Test: blockdev write zeroes read no split ...passed 01:11:17.905 Test: blockdev write zeroes read split ...passed 01:11:17.905 Test: blockdev write zeroes read split partial ...passed 01:11:17.905 Test: blockdev reset ...[2024-07-22 11:08:22.884965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:11:17.905 [2024-07-22 11:08:22.885318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed4e90 (9): Bad file descriptor 01:11:17.905 [2024-07-22 11:08:22.899965] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
01:11:17.905 passed 01:11:17.905 Test: blockdev write read 8 blocks ...passed 01:11:17.905 Test: blockdev write read size > 128k ...passed 01:11:17.905 Test: blockdev write read invalid size ...passed 01:11:17.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:11:17.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:11:17.905 Test: blockdev write read max offset ...passed 01:11:17.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:11:17.905 Test: blockdev writev readv 8 blocks ...passed 01:11:17.905 Test: blockdev writev readv 30 x 1block ...passed 01:11:17.905 Test: blockdev writev readv block ...passed 01:11:17.905 Test: blockdev writev readv size > 128k ...passed 01:11:17.905 Test: blockdev writev readv size > 128k in two iovs ...passed 01:11:17.905 Test: blockdev comparev and writev ...[2024-07-22 11:08:23.080102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:11:17.905 [2024-07-22 11:08:23.080160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:11:17.905 [2024-07-22 11:08:23.080198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:11:17.905 [2024-07-22 11:08:23.080209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:17.905 [2024-07-22 11:08:23.080759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:11:17.905 [2024-07-22 11:08:23.080788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:11:17.905 [2024-07-22 11:08:23.080807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:11:17.905 [2024-07-22 11:08:23.080819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:11:17.905 [2024-07-22 11:08:23.081315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:11:17.905 [2024-07-22 11:08:23.081354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:11:17.905 [2024-07-22 11:08:23.081376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:11:17.905 [2024-07-22 11:08:23.081387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:11:17.905 [2024-07-22 11:08:23.081781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:11:17.905 [2024-07-22 11:08:23.081804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:11:17.905 [2024-07-22 11:08:23.081822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:11:17.905 [2024-07-22 11:08:23.081833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:11:18.163 passed 01:11:18.163 Test: blockdev nvme passthru rw ...passed 01:11:18.164 Test: blockdev nvme passthru vendor specific ...[2024-07-22 11:08:23.166911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:11:18.164 [2024-07-22 11:08:23.167005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:11:18.164 passed 01:11:18.164 Test: blockdev nvme admin passthru ...[2024-07-22 11:08:23.167193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:11:18.164 [2024-07-22 11:08:23.167340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:11:18.164 [2024-07-22 11:08:23.167557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:11:18.164 [2024-07-22 11:08:23.167581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:11:18.164 [2024-07-22 11:08:23.167778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:11:18.164 [2024-07-22 11:08:23.167795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:11:18.164 passed 01:11:18.164 Test: blockdev copy ...passed 01:11:18.164 01:11:18.164 Run Summary: Type Total Ran Passed Failed Inactive 01:11:18.164 suites 1 1 n/a 0 0 01:11:18.164 tests 23 23 23 0 0 01:11:18.164 asserts 152 152 152 0 n/a 01:11:18.164 01:11:18.164 Elapsed time = 0.917 seconds 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:11:18.421 rmmod nvme_tcp 01:11:18.421 rmmod nvme_fabrics 01:11:18.421 rmmod nvme_keyring 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 93858 ']' 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 93858 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
93858 ']' 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 93858 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:11:18.421 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93858 01:11:18.679 killing process with pid 93858 01:11:18.679 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 01:11:18.679 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 01:11:18.679 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93858' 01:11:18.679 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 93858 01:11:18.679 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 93858 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:11:18.938 01:11:18.938 real 0m3.403s 01:11:18.938 user 0m12.248s 01:11:18.938 sys 0m0.897s 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 01:11:18.938 ************************************ 01:11:18.938 END TEST nvmf_bdevio 01:11:18.938 ************************************ 01:11:18.938 11:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:11:18.938 11:08:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:11:18.938 11:08:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:11:18.938 11:08:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:11:18.938 11:08:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:11:18.938 11:08:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:11:18.938 ************************************ 01:11:18.938 START TEST nvmf_auth_target 01:11:18.938 ************************************ 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:11:18.938 * Looking for test storage... 
01:11:18.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:18.938 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:11:18.939 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:11:19.196 Cannot find device "nvmf_tgt_br" 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:11:19.196 Cannot find device "nvmf_tgt_br2" 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:11:19.196 Cannot find device "nvmf_tgt_br" 01:11:19.196 
11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:11:19.196 Cannot find device "nvmf_tgt_br2" 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:19.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:19.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:11:19.196 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:11:19.454 11:08:24 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:11:19.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:11:19.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 01:11:19.454 01:11:19.454 --- 10.0.0.2 ping statistics --- 01:11:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:19.454 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:11:19.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:11:19.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 01:11:19.454 01:11:19.454 --- 10.0.0.3 ping statistics --- 01:11:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:19.454 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:11:19.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:19.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:11:19.454 01:11:19.454 --- 10.0.0.1 ping statistics --- 01:11:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:19.454 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=94102 01:11:19.454 11:08:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 94102 01:11:19.455 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94102 ']' 01:11:19.455 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:19.455 11:08:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:19.455 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:19.455 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:19.455 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:20.390 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:20.390 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:11:20.390 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:11:20.390 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 01:11:20.390 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=94146 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dc901bee06677662e8ca98432908159ef31933af4a7f34cb 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.efi 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dc901bee06677662e8ca98432908159ef31933af4a7f34cb 0 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dc901bee06677662e8ca98432908159ef31933af4a7f34cb 0 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dc901bee06677662e8ca98432908159ef31933af4a7f34cb 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.efi 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.efi 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.efi 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3a374d8635346dcdeced75f7bf9387175983db6f68b5792d55581084d73bcf8e 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gzH 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3a374d8635346dcdeced75f7bf9387175983db6f68b5792d55581084d73bcf8e 3 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3a374d8635346dcdeced75f7bf9387175983db6f68b5792d55581084d73bcf8e 3 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3a374d8635346dcdeced75f7bf9387175983db6f68b5792d55581084d73bcf8e 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gzH 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gzH 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.gzH 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a67c60885eb3b2ee8f69c47de2fd7743 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3V3 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a67c60885eb3b2ee8f69c47de2fd7743 1 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a67c60885eb3b2ee8f69c47de2fd7743 1 
01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a67c60885eb3b2ee8f69c47de2fd7743 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3V3 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3V3 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.3V3 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 01:11:20.649 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2453f43730488f76646bd6f6fda29542766de09342dc4403 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hcX 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2453f43730488f76646bd6f6fda29542766de09342dc4403 2 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2453f43730488f76646bd6f6fda29542766de09342dc4403 2 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2453f43730488f76646bd6f6fda29542766de09342dc4403 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 01:11:20.650 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hcX 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hcX 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.hcX 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 01:11:20.908 
11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e124c6f9cfb30fe46b33aea145694ca2d7d0bb7dccdb6215 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Nhz 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e124c6f9cfb30fe46b33aea145694ca2d7d0bb7dccdb6215 2 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e124c6f9cfb30fe46b33aea145694ca2d7d0bb7dccdb6215 2 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:11:20.908 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e124c6f9cfb30fe46b33aea145694ca2d7d0bb7dccdb6215 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Nhz 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Nhz 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Nhz 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=208ab9c433df56456d15cfe5791d7fe6 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iQ8 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 208ab9c433df56456d15cfe5791d7fe6 1 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 208ab9c433df56456d15cfe5791d7fe6 1 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=208ab9c433df56456d15cfe5791d7fe6 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 01:11:20.909 11:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iQ8 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iQ8 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.iQ8 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7f16d085de7832278e56eb10fe1a0bf376342a43fe3914d97ec5145f2c8fa4f9 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1cc 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7f16d085de7832278e56eb10fe1a0bf376342a43fe3914d97ec5145f2c8fa4f9 3 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7f16d085de7832278e56eb10fe1a0bf376342a43fe3914d97ec5145f2c8fa4f9 3 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7f16d085de7832278e56eb10fe1a0bf376342a43fe3914d97ec5145f2c8fa4f9 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1cc 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1cc 01:11:20.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.1cc 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 94102 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94102 ']' 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:20.909 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:21.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
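Note on the key material generated above (keys[0..3], ckeys[0..2]): gen_dhchap_key draws len/2 random bytes as hex with xxd and wraps them in a DH-HMAC-CHAP secret via a python helper that xtrace does not show. The sketch below is a reconstruction, not the helper itself; it assumes the usual DHHC-1 layout of base64(hex-string || CRC-32) with a two-digit transform id (00 = none, 01/02/03 = sha256/384/512), which matches the secrets that appear later on the nvme connect command line (e.g. DHHC-1:00:ZGM5MDFi...: for the key stored as /tmp/spdk.key-null.efi).

# Sketch, assuming the layout described above; values mirror "gen_dhchap_key null 48".
key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex chars
python3 - "$key" 0 <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                 # the hex string itself is the secret
digest = int(sys.argv[2])                  # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(key).to_bytes(4, "little")            # assumed trailing CRC-32
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
# The /tmp/spdk.key-*.XXX files registered with keyring_file_add_key below presumably hold
# these formatted strings, and the same strings are passed inline to
# "nvme connect --dhchap-secret ... --dhchap-ctrl-secret ..." during connect_authenticate.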
01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 94146 /var/tmp/host.sock 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 94146 ']' 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:21.477 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:21.735 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:21.735 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 01:11:21.735 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.efi 01:11:21.735 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:21.735 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:21.735 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:21.735 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.efi 01:11:21.735 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.efi 01:11:21.994 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.gzH ]] 01:11:21.994 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gzH 01:11:21.994 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:21.994 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:21.994 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:21.994 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gzH 01:11:21.994 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gzH 01:11:22.253 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 01:11:22.253 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.3V3 01:11:22.253 11:08:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:22.253 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:22.253 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:22.253 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.3V3 01:11:22.253 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.3V3 01:11:22.512 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.hcX ]] 01:11:22.512 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hcX 01:11:22.512 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:22.512 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:22.512 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:22.512 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hcX 01:11:22.512 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hcX 01:11:22.771 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 01:11:22.771 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Nhz 01:11:22.771 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:22.771 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:22.771 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:22.771 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Nhz 01:11:22.771 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Nhz 01:11:23.030 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.iQ8 ]] 01:11:23.030 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iQ8 01:11:23.030 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:23.030 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:23.030 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:23.030 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iQ8 01:11:23.030 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iQ8 01:11:23.289 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 01:11:23.289 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1cc 01:11:23.289 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:23.289 11:08:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 01:11:23.289 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:23.289 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1cc 01:11:23.289 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1cc 01:11:23.548 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 01:11:23.548 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 01:11:23.548 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:11:23.548 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:23.548 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:11:23.548 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:23.807 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:24.066 01:11:24.066 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:24.066 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:24.066 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:24.324 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:24.324 11:08:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:24.324 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:24.324 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:24.324 11:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:24.324 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:24.324 { 01:11:24.324 "auth": { 01:11:24.324 "dhgroup": "null", 01:11:24.324 "digest": "sha256", 01:11:24.324 "state": "completed" 01:11:24.324 }, 01:11:24.324 "cntlid": 1, 01:11:24.324 "listen_address": { 01:11:24.324 "adrfam": "IPv4", 01:11:24.324 "traddr": "10.0.0.2", 01:11:24.324 "trsvcid": "4420", 01:11:24.324 "trtype": "TCP" 01:11:24.324 }, 01:11:24.324 "peer_address": { 01:11:24.324 "adrfam": "IPv4", 01:11:24.324 "traddr": "10.0.0.1", 01:11:24.324 "trsvcid": "33576", 01:11:24.324 "trtype": "TCP" 01:11:24.324 }, 01:11:24.324 "qid": 0, 01:11:24.324 "state": "enabled", 01:11:24.324 "thread": "nvmf_tgt_poll_group_000" 01:11:24.324 } 01:11:24.324 ]' 01:11:24.324 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:24.582 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:24.582 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:24.582 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:11:24.582 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:24.582 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:24.582 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:24.582 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:24.840 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:30.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:30.103 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:30.104 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:30.104 01:11:30.104 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:30.104 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:30.104 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:30.362 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:30.362 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:30.362 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:30.362 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:30.362 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:30.362 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:30.362 { 01:11:30.362 "auth": { 01:11:30.362 "dhgroup": "null", 01:11:30.362 "digest": "sha256", 01:11:30.362 "state": "completed" 01:11:30.362 }, 01:11:30.362 "cntlid": 3, 01:11:30.362 "listen_address": { 01:11:30.362 "adrfam": "IPv4", 01:11:30.362 "traddr": "10.0.0.2", 01:11:30.362 "trsvcid": "4420", 01:11:30.362 "trtype": "TCP" 01:11:30.362 }, 01:11:30.362 "peer_address": { 01:11:30.362 "adrfam": "IPv4", 01:11:30.362 "traddr": "10.0.0.1", 01:11:30.362 "trsvcid": "60246", 01:11:30.362 "trtype": "TCP" 01:11:30.362 }, 01:11:30.362 "qid": 0, 01:11:30.362 "state": "enabled", 01:11:30.362 "thread": "nvmf_tgt_poll_group_000" 
01:11:30.362 } 01:11:30.362 ]' 01:11:30.362 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:30.629 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:30.629 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:30.629 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:11:30.629 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:30.629 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:30.629 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:30.629 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:30.906 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:11:31.848 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:31.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:31.848 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:31.848 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:31.848 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:31.848 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:31.848 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:31.848 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:11:31.848 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:11:32.106 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 01:11:32.106 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:32.106 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:32.106 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:11:32.106 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:11:32.107 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:32.107 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:32.107 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:32.107 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 01:11:32.107 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:32.107 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:32.107 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:32.364 01:11:32.364 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:32.364 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:32.364 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:32.623 { 01:11:32.623 "auth": { 01:11:32.623 "dhgroup": "null", 01:11:32.623 "digest": "sha256", 01:11:32.623 "state": "completed" 01:11:32.623 }, 01:11:32.623 "cntlid": 5, 01:11:32.623 "listen_address": { 01:11:32.623 "adrfam": "IPv4", 01:11:32.623 "traddr": "10.0.0.2", 01:11:32.623 "trsvcid": "4420", 01:11:32.623 "trtype": "TCP" 01:11:32.623 }, 01:11:32.623 "peer_address": { 01:11:32.623 "adrfam": "IPv4", 01:11:32.623 "traddr": "10.0.0.1", 01:11:32.623 "trsvcid": "60274", 01:11:32.623 "trtype": "TCP" 01:11:32.623 }, 01:11:32.623 "qid": 0, 01:11:32.623 "state": "enabled", 01:11:32.623 "thread": "nvmf_tgt_poll_group_000" 01:11:32.623 } 01:11:32.623 ]' 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:32.623 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:32.882 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:11:32.882 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:32.882 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:32.882 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:32.882 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:33.139 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 
8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:11:34.072 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:34.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:34.072 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:34.072 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:34.072 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:34.072 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:34.072 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:34.072 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:11:34.072 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:11:34.072 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:11:34.640 01:11:34.640 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:34.640 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:34.640 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:34.898 { 01:11:34.898 "auth": { 01:11:34.898 "dhgroup": "null", 01:11:34.898 "digest": "sha256", 01:11:34.898 "state": "completed" 01:11:34.898 }, 01:11:34.898 "cntlid": 7, 01:11:34.898 "listen_address": { 01:11:34.898 "adrfam": "IPv4", 01:11:34.898 "traddr": "10.0.0.2", 01:11:34.898 "trsvcid": "4420", 01:11:34.898 "trtype": "TCP" 01:11:34.898 }, 01:11:34.898 "peer_address": { 01:11:34.898 "adrfam": "IPv4", 01:11:34.898 "traddr": "10.0.0.1", 01:11:34.898 "trsvcid": "60292", 01:11:34.898 "trtype": "TCP" 01:11:34.898 }, 01:11:34.898 "qid": 0, 01:11:34.898 "state": "enabled", 01:11:34.898 "thread": "nvmf_tgt_poll_group_000" 01:11:34.898 } 01:11:34.898 ]' 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:34.898 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:34.898 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:11:34.898 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:34.898 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:34.898 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:34.898 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:35.462 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:11:36.027 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:36.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:36.027 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:36.027 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:36.027 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:36.027 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:36.027 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:11:36.027 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:36.027 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:36.027 11:08:41 
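From here the trace repeats the same cycle for the next DH group (ffdhe2048, then ffdhe3072 and ffdhe4096 further down), driven by the nested loops at auth.sh lines 91-93: for every digest, for every DH group, for every key index, re-issue bdev_nvme_set_options and run connect_authenticate. The assumed shape of that driver loop, populated only with the values that actually appear in this part of the trace:

  # hostrpc and connect_authenticate are helpers from target/auth.sh; the array
  # contents below are only what this excerpt of the trace shows.
  digests=(sha256)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)
  keys=(key0 key1 key2 key3)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done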
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:36.286 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:36.851 01:11:36.852 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:36.852 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:36.852 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:37.109 { 01:11:37.109 "auth": { 01:11:37.109 "dhgroup": "ffdhe2048", 01:11:37.109 "digest": "sha256", 01:11:37.109 "state": "completed" 01:11:37.109 }, 01:11:37.109 "cntlid": 9, 01:11:37.109 "listen_address": { 01:11:37.109 "adrfam": "IPv4", 01:11:37.109 "traddr": "10.0.0.2", 01:11:37.109 "trsvcid": "4420", 01:11:37.109 "trtype": "TCP" 01:11:37.109 }, 01:11:37.109 "peer_address": { 01:11:37.109 "adrfam": "IPv4", 01:11:37.109 "traddr": "10.0.0.1", 01:11:37.109 "trsvcid": "60306", 01:11:37.109 "trtype": "TCP" 01:11:37.109 }, 01:11:37.109 "qid": 0, 
01:11:37.109 "state": "enabled", 01:11:37.109 "thread": "nvmf_tgt_poll_group_000" 01:11:37.109 } 01:11:37.109 ]' 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:37.109 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:37.676 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:11:38.241 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:38.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:38.241 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:38.241 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:38.241 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:38.241 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:38.241 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:38.241 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:38.241 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:38.498 11:08:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:38.498 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:39.063 01:11:39.063 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:39.063 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:39.063 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:39.322 { 01:11:39.322 "auth": { 01:11:39.322 "dhgroup": "ffdhe2048", 01:11:39.322 "digest": "sha256", 01:11:39.322 "state": "completed" 01:11:39.322 }, 01:11:39.322 "cntlid": 11, 01:11:39.322 "listen_address": { 01:11:39.322 "adrfam": "IPv4", 01:11:39.322 "traddr": "10.0.0.2", 01:11:39.322 "trsvcid": "4420", 01:11:39.322 "trtype": "TCP" 01:11:39.322 }, 01:11:39.322 "peer_address": { 01:11:39.322 "adrfam": "IPv4", 01:11:39.322 "traddr": "10.0.0.1", 01:11:39.322 "trsvcid": "60336", 01:11:39.322 "trtype": "TCP" 01:11:39.322 }, 01:11:39.322 "qid": 0, 01:11:39.322 "state": "enabled", 01:11:39.322 "thread": "nvmf_tgt_poll_group_000" 01:11:39.322 } 01:11:39.322 ]' 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:39.322 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:39.581 11:08:44 nvmf_tcp.nvmf_auth_target -- 
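The RPC-attached controller is only half of the host-side coverage; the same key material is then exercised through the kernel initiator. nvme connect is handed the keys as DHHC-1 secrets (the two-digit field after DHHC-1 identifies the hash used to transform the secret, 00 being a plain, unhashed secret), and the controller is removed again with nvme disconnect. A sketch with the secret values elided; the real base64 blobs are the ones visible in the trace:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479
  HOST_SECRET='DHHC-1:01:...'   # host key for this iteration (value elided)
  CTRL_SECRET='DHHC-1:02:...'   # controller key, present only for bidirectional auth

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0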
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:11:40.515 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:40.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:40.515 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:40.515 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:40.515 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:40.515 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:40.515 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:40.515 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:40.515 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:40.774 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:41.032 01:11:41.032 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:41.032 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 01:11:41.032 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:41.291 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:41.291 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:41.291 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:41.291 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:41.549 { 01:11:41.549 "auth": { 01:11:41.549 "dhgroup": "ffdhe2048", 01:11:41.549 "digest": "sha256", 01:11:41.549 "state": "completed" 01:11:41.549 }, 01:11:41.549 "cntlid": 13, 01:11:41.549 "listen_address": { 01:11:41.549 "adrfam": "IPv4", 01:11:41.549 "traddr": "10.0.0.2", 01:11:41.549 "trsvcid": "4420", 01:11:41.549 "trtype": "TCP" 01:11:41.549 }, 01:11:41.549 "peer_address": { 01:11:41.549 "adrfam": "IPv4", 01:11:41.549 "traddr": "10.0.0.1", 01:11:41.549 "trsvcid": "32804", 01:11:41.549 "trtype": "TCP" 01:11:41.549 }, 01:11:41.549 "qid": 0, 01:11:41.549 "state": "enabled", 01:11:41.549 "thread": "nvmf_tgt_poll_group_000" 01:11:41.549 } 01:11:41.549 ]' 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:41.549 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:41.806 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:11:42.738 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:42.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:42.738 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:42.738 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:42.738 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:42.738 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:42.738 11:08:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:42.738 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:42.738 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:11:42.996 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:11:43.254 01:11:43.255 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:43.255 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:43.255 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:43.512 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:43.512 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:43.512 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:43.512 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:43.512 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:43.512 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:43.512 { 01:11:43.512 "auth": { 01:11:43.512 "dhgroup": "ffdhe2048", 01:11:43.512 "digest": "sha256", 01:11:43.512 "state": "completed" 01:11:43.512 }, 01:11:43.512 "cntlid": 15, 01:11:43.512 "listen_address": { 01:11:43.512 "adrfam": "IPv4", 01:11:43.512 "traddr": "10.0.0.2", 01:11:43.512 "trsvcid": "4420", 01:11:43.512 "trtype": "TCP" 01:11:43.512 }, 01:11:43.512 "peer_address": { 01:11:43.512 "adrfam": 
"IPv4", 01:11:43.512 "traddr": "10.0.0.1", 01:11:43.512 "trsvcid": "32836", 01:11:43.512 "trtype": "TCP" 01:11:43.512 }, 01:11:43.512 "qid": 0, 01:11:43.512 "state": "enabled", 01:11:43.512 "thread": "nvmf_tgt_poll_group_000" 01:11:43.512 } 01:11:43.512 ]' 01:11:43.512 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:43.770 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:43.770 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:43.770 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:11:43.770 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:43.770 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:43.770 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:43.770 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:44.028 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:44.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:44.605 11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:45.171 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:45.429 01:11:45.429 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:45.429 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:45.429 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:45.687 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:45.687 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:45.687 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:45.687 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:45.687 11:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:45.687 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:45.687 { 01:11:45.687 "auth": { 01:11:45.687 "dhgroup": "ffdhe3072", 01:11:45.687 "digest": "sha256", 01:11:45.687 "state": "completed" 01:11:45.687 }, 01:11:45.687 "cntlid": 17, 01:11:45.687 "listen_address": { 01:11:45.687 "adrfam": "IPv4", 01:11:45.687 "traddr": "10.0.0.2", 01:11:45.687 "trsvcid": "4420", 01:11:45.687 "trtype": "TCP" 01:11:45.687 }, 01:11:45.687 "peer_address": { 01:11:45.687 "adrfam": "IPv4", 01:11:45.687 "traddr": "10.0.0.1", 01:11:45.688 "trsvcid": "32854", 01:11:45.688 "trtype": "TCP" 01:11:45.688 }, 01:11:45.688 "qid": 0, 01:11:45.688 "state": "enabled", 01:11:45.688 "thread": "nvmf_tgt_poll_group_000" 01:11:45.688 } 01:11:45.688 ]' 01:11:45.688 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:45.688 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:45.688 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:45.946 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:11:45.946 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:45.946 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:45.946 11:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:45.946 11:08:50 nvmf_tcp.nvmf_auth_target -- 
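A note on reading the trace itself: every hostrpc entry expands at auth.sh line 31 into rpc.py -s /var/tmp/host.sock, i.e. the host-side SPDK application, while rpc_cmd (from common/autotest_common.sh) drives the target on its default socket and wraps each call in xtrace_disable, which is why the rpc_cmd entries are bracketed by the @559/@10 "set +x" lines and the "[[ 0 == 0 ]]" status check. A hypothetical reconstruction of the host wrapper, matching what the trace prints rather than copied from auth.sh:

  # Hypothetical: mirrors the expansion shown at target/auth.sh@31.
  hostrpc() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
  }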
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:46.205 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:11:46.778 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:46.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:46.778 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:46.778 11:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:46.778 11:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:46.778 11:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:46.778 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:46.778 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:46.778 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:47.077 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 01:11:47.642 01:11:47.642 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:47.642 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:47.642 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:47.901 { 01:11:47.901 "auth": { 01:11:47.901 "dhgroup": "ffdhe3072", 01:11:47.901 "digest": "sha256", 01:11:47.901 "state": "completed" 01:11:47.901 }, 01:11:47.901 "cntlid": 19, 01:11:47.901 "listen_address": { 01:11:47.901 "adrfam": "IPv4", 01:11:47.901 "traddr": "10.0.0.2", 01:11:47.901 "trsvcid": "4420", 01:11:47.901 "trtype": "TCP" 01:11:47.901 }, 01:11:47.901 "peer_address": { 01:11:47.901 "adrfam": "IPv4", 01:11:47.901 "traddr": "10.0.0.1", 01:11:47.901 "trsvcid": "32890", 01:11:47.901 "trtype": "TCP" 01:11:47.901 }, 01:11:47.901 "qid": 0, 01:11:47.901 "state": "enabled", 01:11:47.901 "thread": "nvmf_tgt_poll_group_000" 01:11:47.901 } 01:11:47.901 ]' 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:47.901 11:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:47.901 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:11:47.901 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:47.901 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:47.901 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:47.901 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:48.465 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:11:49.034 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:49.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:49.034 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:49.034 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:49.034 
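Each iteration is torn down symmetrically before the next one starts: the RPC-attached controller is dropped with bdev_nvme_detach_controller, the kernel initiator disconnects, and the host entry is removed from the subsystem so the following digest/DH-group/key combination begins from a clean grant list. In the script's own terms (auth.sh lines 49, 55 and 56 in the trace):

  hostrpc bdev_nvme_detach_controller nvme0
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479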
11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:49.034 11:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:49.034 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:49.034 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:49.034 11:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:49.292 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:49.550 01:11:49.550 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:49.550 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:49.550 11:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:50.116 { 01:11:50.116 "auth": { 01:11:50.116 "dhgroup": "ffdhe3072", 01:11:50.116 "digest": "sha256", 
01:11:50.116 "state": "completed" 01:11:50.116 }, 01:11:50.116 "cntlid": 21, 01:11:50.116 "listen_address": { 01:11:50.116 "adrfam": "IPv4", 01:11:50.116 "traddr": "10.0.0.2", 01:11:50.116 "trsvcid": "4420", 01:11:50.116 "trtype": "TCP" 01:11:50.116 }, 01:11:50.116 "peer_address": { 01:11:50.116 "adrfam": "IPv4", 01:11:50.116 "traddr": "10.0.0.1", 01:11:50.116 "trsvcid": "58982", 01:11:50.116 "trtype": "TCP" 01:11:50.116 }, 01:11:50.116 "qid": 0, 01:11:50.116 "state": "enabled", 01:11:50.116 "thread": "nvmf_tgt_poll_group_000" 01:11:50.116 } 01:11:50.116 ]' 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:50.116 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:50.374 11:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:11:50.939 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:50.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:50.939 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:50.939 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:50.939 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:50.939 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:50.939 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:50.940 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:50.940 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # key=key3 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:11:51.197 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:11:51.762 01:11:51.762 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:51.762 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:51.762 11:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:52.020 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:52.020 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:52.020 11:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:52.020 11:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:52.020 11:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:52.020 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:52.020 { 01:11:52.020 "auth": { 01:11:52.020 "dhgroup": "ffdhe3072", 01:11:52.020 "digest": "sha256", 01:11:52.020 "state": "completed" 01:11:52.020 }, 01:11:52.020 "cntlid": 23, 01:11:52.020 "listen_address": { 01:11:52.020 "adrfam": "IPv4", 01:11:52.020 "traddr": "10.0.0.2", 01:11:52.020 "trsvcid": "4420", 01:11:52.020 "trtype": "TCP" 01:11:52.020 }, 01:11:52.020 "peer_address": { 01:11:52.020 "adrfam": "IPv4", 01:11:52.020 "traddr": "10.0.0.1", 01:11:52.020 "trsvcid": "59010", 01:11:52.020 "trtype": "TCP" 01:11:52.020 }, 01:11:52.020 "qid": 0, 01:11:52.020 "state": "enabled", 01:11:52.020 "thread": "nvmf_tgt_poll_group_000" 01:11:52.020 } 01:11:52.020 ]' 01:11:52.020 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:52.020 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:52.021 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:52.021 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:11:52.021 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:52.021 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:52.021 
11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:52.021 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:52.586 11:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:53.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:53.151 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:53.409 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:53.666 01:11:53.924 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:53.924 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:53.924 11:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:53.924 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:53.924 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:53.924 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:53.924 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:53.924 11:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:53.924 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:53.924 { 01:11:53.924 "auth": { 01:11:53.924 "dhgroup": "ffdhe4096", 01:11:53.924 "digest": "sha256", 01:11:53.924 "state": "completed" 01:11:53.924 }, 01:11:53.924 "cntlid": 25, 01:11:53.924 "listen_address": { 01:11:53.924 "adrfam": "IPv4", 01:11:53.924 "traddr": "10.0.0.2", 01:11:53.924 "trsvcid": "4420", 01:11:53.924 "trtype": "TCP" 01:11:53.924 }, 01:11:53.924 "peer_address": { 01:11:53.924 "adrfam": "IPv4", 01:11:53.924 "traddr": "10.0.0.1", 01:11:53.924 "trsvcid": "59044", 01:11:53.924 "trtype": "TCP" 01:11:53.924 }, 01:11:53.924 "qid": 0, 01:11:53.924 "state": "enabled", 01:11:53.924 "thread": "nvmf_tgt_poll_group_000" 01:11:53.924 } 01:11:53.924 ]' 01:11:53.924 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:54.182 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:54.182 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:54.182 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:11:54.182 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:54.182 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:54.182 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:54.182 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:54.440 11:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:11:55.007 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:55.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:55.007 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:55.007 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:55.007 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:55.007 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:55.007 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:55.007 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:55.007 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:55.266 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:55.267 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:55.834 01:11:55.834 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:55.834 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:55.834 11:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:56.093 { 01:11:56.093 "auth": { 01:11:56.093 "dhgroup": "ffdhe4096", 01:11:56.093 "digest": "sha256", 01:11:56.093 "state": "completed" 01:11:56.093 }, 01:11:56.093 "cntlid": 27, 01:11:56.093 "listen_address": { 01:11:56.093 "adrfam": "IPv4", 01:11:56.093 "traddr": "10.0.0.2", 01:11:56.093 "trsvcid": "4420", 01:11:56.093 "trtype": "TCP" 01:11:56.093 }, 01:11:56.093 "peer_address": { 01:11:56.093 "adrfam": "IPv4", 01:11:56.093 "traddr": "10.0.0.1", 01:11:56.093 "trsvcid": "59068", 01:11:56.093 "trtype": "TCP" 01:11:56.093 }, 01:11:56.093 "qid": 0, 01:11:56.093 "state": "enabled", 01:11:56.093 "thread": "nvmf_tgt_poll_group_000" 01:11:56.093 } 01:11:56.093 ]' 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:56.093 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:56.659 11:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:11:57.225 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:57.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:57.225 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:57.225 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:57.225 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:57.225 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:57.225 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:57.225 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:57.225 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:57.482 11:09:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:57.482 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:58.048 01:11:58.048 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:11:58.048 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:11:58.048 11:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:11:58.305 { 01:11:58.305 "auth": { 01:11:58.305 "dhgroup": "ffdhe4096", 01:11:58.305 "digest": "sha256", 01:11:58.305 "state": "completed" 01:11:58.305 }, 01:11:58.305 "cntlid": 29, 01:11:58.305 "listen_address": { 01:11:58.305 "adrfam": "IPv4", 01:11:58.305 "traddr": "10.0.0.2", 01:11:58.305 "trsvcid": "4420", 01:11:58.305 "trtype": "TCP" 01:11:58.305 }, 01:11:58.305 "peer_address": { 01:11:58.305 "adrfam": "IPv4", 01:11:58.305 "traddr": "10.0.0.1", 01:11:58.305 "trsvcid": "59102", 01:11:58.305 "trtype": "TCP" 01:11:58.305 }, 01:11:58.305 "qid": 0, 01:11:58.305 "state": "enabled", 01:11:58.305 "thread": "nvmf_tgt_poll_group_000" 01:11:58.305 } 01:11:58.305 ]' 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:11:58.305 11:09:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:11:58.305 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:11:58.563 11:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:11:59.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:11:59.495 11:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:00.075 01:12:00.075 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:00.075 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:00.075 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:00.333 { 01:12:00.333 "auth": { 01:12:00.333 "dhgroup": "ffdhe4096", 01:12:00.333 "digest": "sha256", 01:12:00.333 "state": "completed" 01:12:00.333 }, 01:12:00.333 "cntlid": 31, 01:12:00.333 "listen_address": { 01:12:00.333 "adrfam": "IPv4", 01:12:00.333 "traddr": "10.0.0.2", 01:12:00.333 "trsvcid": "4420", 01:12:00.333 "trtype": "TCP" 01:12:00.333 }, 01:12:00.333 "peer_address": { 01:12:00.333 "adrfam": "IPv4", 01:12:00.333 "traddr": "10.0.0.1", 01:12:00.333 "trsvcid": "56248", 01:12:00.333 "trtype": "TCP" 01:12:00.333 }, 01:12:00.333 "qid": 0, 01:12:00.333 "state": "enabled", 01:12:00.333 "thread": "nvmf_tgt_poll_group_000" 01:12:00.333 } 01:12:00.333 ]' 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:00.333 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:00.899 11:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:12:01.465 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:01.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:01.465 11:09:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:01.465 11:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:01.465 11:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:01.465 11:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:01.465 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:12:01.465 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:01.465 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:12:01.465 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:01.724 11:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:02.291 01:12:02.291 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:02.291 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:02.291 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:02.574 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:02.574 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:02.574 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:12:02.574 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:02.574 11:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:02.574 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:02.574 { 01:12:02.574 "auth": { 01:12:02.574 "dhgroup": "ffdhe6144", 01:12:02.574 "digest": "sha256", 01:12:02.574 "state": "completed" 01:12:02.574 }, 01:12:02.574 "cntlid": 33, 01:12:02.574 "listen_address": { 01:12:02.574 "adrfam": "IPv4", 01:12:02.574 "traddr": "10.0.0.2", 01:12:02.574 "trsvcid": "4420", 01:12:02.574 "trtype": "TCP" 01:12:02.574 }, 01:12:02.574 "peer_address": { 01:12:02.574 "adrfam": "IPv4", 01:12:02.574 "traddr": "10.0.0.1", 01:12:02.574 "trsvcid": "56274", 01:12:02.574 "trtype": "TCP" 01:12:02.574 }, 01:12:02.574 "qid": 0, 01:12:02.574 "state": "enabled", 01:12:02.574 "thread": "nvmf_tgt_poll_group_000" 01:12:02.575 } 01:12:02.575 ]' 01:12:02.575 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:02.575 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:02.575 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:02.575 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:12:02.575 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:02.848 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:02.848 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:02.848 11:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:03.106 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:12:03.673 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:03.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:03.673 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:03.673 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:03.673 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:03.673 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:03.673 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:03.673 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:12:03.673 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:12:03.932 11:09:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 01:12:03.932 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:03.932 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:12:03.932 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:12:03.932 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:12:03.932 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:03.932 11:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:03.933 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:03.933 11:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:03.933 11:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:03.933 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:03.933 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:04.501 01:12:04.501 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:04.501 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:04.501 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:04.784 { 01:12:04.784 "auth": { 01:12:04.784 "dhgroup": "ffdhe6144", 01:12:04.784 "digest": "sha256", 01:12:04.784 "state": "completed" 01:12:04.784 }, 01:12:04.784 "cntlid": 35, 01:12:04.784 "listen_address": { 01:12:04.784 "adrfam": "IPv4", 01:12:04.784 "traddr": "10.0.0.2", 01:12:04.784 "trsvcid": "4420", 01:12:04.784 "trtype": "TCP" 01:12:04.784 }, 01:12:04.784 "peer_address": { 01:12:04.784 "adrfam": "IPv4", 01:12:04.784 "traddr": "10.0.0.1", 01:12:04.784 "trsvcid": "56300", 01:12:04.784 "trtype": "TCP" 01:12:04.784 }, 01:12:04.784 "qid": 0, 01:12:04.784 "state": "enabled", 01:12:04.784 "thread": "nvmf_tgt_poll_group_000" 01:12:04.784 } 01:12:04.784 ]' 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:04.784 
11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:04.784 11:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:05.351 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:12:05.919 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:05.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:05.919 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:05.919 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:05.919 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:05.919 11:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:05.919 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:05.919 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:12:05.919 11:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
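For readability, the sequence that each connect_authenticate pass in this trace exercises can be condensed as the shell sketch below. The commands are copied from the trace itself (this pass uses sha256/ffdhe6144 with key2); $hostnqn, $hostid and the $key2/$ckey2 variables are placeholders standing in for the host NQN, host ID and DHHC-1 secret strings printed above, hostrpc mirrors the host-side rpc.py invocation shown in the trace, and rpc_cmd is assumed to be the suite's target-side RPC wrapper. This is a minimal sketch of the flow, not a reproduction of target/auth.sh.

# condensed view of one connect_authenticate pass (sha256 / ffdhe6144 / key2)
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side RPC socket as in the trace

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'               # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'                                  # expect "completed"
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key2" --dhchap-ctrl-secret "$ckey2"      # DHHC-1 strings elided here
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The remaining passes in the trace repeat this loop with the other key IDs and DH groups (ffdhe3072 through ffdhe8192), which is why the surrounding entries differ only in the key/dhgroup arguments and the reported cntlid and peer port.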
01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:06.177 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:06.436 01:12:06.695 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:06.695 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:06.695 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:06.954 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:06.954 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:06.954 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:06.954 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:06.954 11:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:06.954 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:06.954 { 01:12:06.954 "auth": { 01:12:06.954 "dhgroup": "ffdhe6144", 01:12:06.954 "digest": "sha256", 01:12:06.954 "state": "completed" 01:12:06.954 }, 01:12:06.954 "cntlid": 37, 01:12:06.954 "listen_address": { 01:12:06.954 "adrfam": "IPv4", 01:12:06.954 "traddr": "10.0.0.2", 01:12:06.954 "trsvcid": "4420", 01:12:06.954 "trtype": "TCP" 01:12:06.954 }, 01:12:06.954 "peer_address": { 01:12:06.954 "adrfam": "IPv4", 01:12:06.954 "traddr": "10.0.0.1", 01:12:06.954 "trsvcid": "56322", 01:12:06.954 "trtype": "TCP" 01:12:06.954 }, 01:12:06.954 "qid": 0, 01:12:06.954 "state": "enabled", 01:12:06.954 "thread": "nvmf_tgt_poll_group_000" 01:12:06.954 } 01:12:06.954 ]' 01:12:06.954 11:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:06.954 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:06.954 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:06.954 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:12:06.954 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:06.954 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:06.954 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:06.954 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:07.520 11:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret 
DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:12:08.084 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:08.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:08.084 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:08.084 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:08.084 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:08.084 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:08.084 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:08.084 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:12:08.084 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:08.341 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:08.905 01:12:08.905 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:08.905 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:08.905 11:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:09.163 11:09:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:09.163 { 01:12:09.163 "auth": { 01:12:09.163 "dhgroup": "ffdhe6144", 01:12:09.163 "digest": "sha256", 01:12:09.163 "state": "completed" 01:12:09.163 }, 01:12:09.163 "cntlid": 39, 01:12:09.163 "listen_address": { 01:12:09.163 "adrfam": "IPv4", 01:12:09.163 "traddr": "10.0.0.2", 01:12:09.163 "trsvcid": "4420", 01:12:09.163 "trtype": "TCP" 01:12:09.163 }, 01:12:09.163 "peer_address": { 01:12:09.163 "adrfam": "IPv4", 01:12:09.163 "traddr": "10.0.0.1", 01:12:09.163 "trsvcid": "56358", 01:12:09.163 "trtype": "TCP" 01:12:09.163 }, 01:12:09.163 "qid": 0, 01:12:09.163 "state": "enabled", 01:12:09.163 "thread": "nvmf_tgt_poll_group_000" 01:12:09.163 } 01:12:09.163 ]' 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:12:09.163 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:09.421 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:09.421 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:09.421 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:09.679 11:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:10.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:12:10.245 11:09:15 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:12:10.503 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 01:12:10.503 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:10.504 11:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:11.438 01:12:11.438 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:11.438 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:11.438 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:11.438 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:11.438 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:11.438 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:11.438 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:11.696 { 01:12:11.696 "auth": { 01:12:11.696 "dhgroup": "ffdhe8192", 01:12:11.696 "digest": "sha256", 01:12:11.696 "state": "completed" 01:12:11.696 }, 01:12:11.696 "cntlid": 41, 01:12:11.696 "listen_address": { 01:12:11.696 "adrfam": "IPv4", 01:12:11.696 "traddr": "10.0.0.2", 01:12:11.696 "trsvcid": "4420", 01:12:11.696 "trtype": "TCP" 01:12:11.696 }, 01:12:11.696 "peer_address": { 01:12:11.696 "adrfam": "IPv4", 01:12:11.696 "traddr": "10.0.0.1", 01:12:11.696 "trsvcid": "47740", 01:12:11.696 "trtype": "TCP" 01:12:11.696 }, 01:12:11.696 "qid": 0, 01:12:11.696 "state": "enabled", 
01:12:11.696 "thread": "nvmf_tgt_poll_group_000" 01:12:11.696 } 01:12:11.696 ]' 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:11.696 11:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:11.954 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:12:12.541 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:12.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:12.800 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:12.800 11:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:12.800 11:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:12.800 11:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:12.800 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:12.800 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:12:12.800 11:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:13.058 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:13.624 01:12:13.624 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:13.624 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:13.624 11:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:13.881 { 01:12:13.881 "auth": { 01:12:13.881 "dhgroup": "ffdhe8192", 01:12:13.881 "digest": "sha256", 01:12:13.881 "state": "completed" 01:12:13.881 }, 01:12:13.881 "cntlid": 43, 01:12:13.881 "listen_address": { 01:12:13.881 "adrfam": "IPv4", 01:12:13.881 "traddr": "10.0.0.2", 01:12:13.881 "trsvcid": "4420", 01:12:13.881 "trtype": "TCP" 01:12:13.881 }, 01:12:13.881 "peer_address": { 01:12:13.881 "adrfam": "IPv4", 01:12:13.881 "traddr": "10.0.0.1", 01:12:13.881 "trsvcid": "47764", 01:12:13.881 "trtype": "TCP" 01:12:13.881 }, 01:12:13.881 "qid": 0, 01:12:13.881 "state": "enabled", 01:12:13.881 "thread": "nvmf_tgt_poll_group_000" 01:12:13.881 } 01:12:13.881 ]' 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:13.881 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:14.138 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:12:14.138 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:14.138 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:14.138 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:14.138 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:14.396 11:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:12:14.961 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:14.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:14.961 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:14.961 11:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:14.961 11:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:14.961 11:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:14.961 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:14.961 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:12:14.961 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:15.233 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:15.799 01:12:15.799 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:15.799 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:15.799 11:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:16.057 { 01:12:16.057 "auth": { 01:12:16.057 "dhgroup": "ffdhe8192", 01:12:16.057 "digest": "sha256", 01:12:16.057 "state": "completed" 01:12:16.057 }, 01:12:16.057 "cntlid": 45, 01:12:16.057 "listen_address": { 01:12:16.057 "adrfam": "IPv4", 01:12:16.057 "traddr": "10.0.0.2", 01:12:16.057 "trsvcid": "4420", 01:12:16.057 "trtype": "TCP" 01:12:16.057 }, 01:12:16.057 "peer_address": { 01:12:16.057 "adrfam": "IPv4", 01:12:16.057 "traddr": "10.0.0.1", 01:12:16.057 "trsvcid": "47794", 01:12:16.057 "trtype": "TCP" 01:12:16.057 }, 01:12:16.057 "qid": 0, 01:12:16.057 "state": "enabled", 01:12:16.057 "thread": "nvmf_tgt_poll_group_000" 01:12:16.057 } 01:12:16.057 ]' 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:16.057 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:16.315 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:12:16.315 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:16.315 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:16.315 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:16.315 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:16.630 11:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:12:17.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:17.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:17.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:17.223 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:17.223 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:17.223 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:17.223 11:09:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:17.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:12:17.223 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:17.482 11:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:18.048 01:12:18.048 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:18.048 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:18.048 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:18.306 { 01:12:18.306 "auth": { 01:12:18.306 "dhgroup": "ffdhe8192", 01:12:18.306 "digest": "sha256", 01:12:18.306 "state": "completed" 01:12:18.306 }, 01:12:18.306 "cntlid": 47, 01:12:18.306 "listen_address": { 01:12:18.306 "adrfam": "IPv4", 01:12:18.306 "traddr": "10.0.0.2", 01:12:18.306 "trsvcid": "4420", 01:12:18.306 "trtype": "TCP" 01:12:18.306 }, 01:12:18.306 
"peer_address": { 01:12:18.306 "adrfam": "IPv4", 01:12:18.306 "traddr": "10.0.0.1", 01:12:18.306 "trsvcid": "47826", 01:12:18.306 "trtype": "TCP" 01:12:18.306 }, 01:12:18.306 "qid": 0, 01:12:18.306 "state": "enabled", 01:12:18.306 "thread": "nvmf_tgt_poll_group_000" 01:12:18.306 } 01:12:18.306 ]' 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:12:18.306 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:18.563 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:12:18.563 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:18.563 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:18.563 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:18.563 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:18.820 11:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:19.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:12:19.386 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:19.952 11:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:20.211 01:12:20.211 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:20.211 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:20.211 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:20.470 { 01:12:20.470 "auth": { 01:12:20.470 "dhgroup": "null", 01:12:20.470 "digest": "sha384", 01:12:20.470 "state": "completed" 01:12:20.470 }, 01:12:20.470 "cntlid": 49, 01:12:20.470 "listen_address": { 01:12:20.470 "adrfam": "IPv4", 01:12:20.470 "traddr": "10.0.0.2", 01:12:20.470 "trsvcid": "4420", 01:12:20.470 "trtype": "TCP" 01:12:20.470 }, 01:12:20.470 "peer_address": { 01:12:20.470 "adrfam": "IPv4", 01:12:20.470 "traddr": "10.0.0.1", 01:12:20.470 "trsvcid": "39388", 01:12:20.470 "trtype": "TCP" 01:12:20.470 }, 01:12:20.470 "qid": 0, 01:12:20.470 "state": "enabled", 01:12:20.470 "thread": "nvmf_tgt_poll_group_000" 01:12:20.470 } 01:12:20.470 ]' 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:12:20.470 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:20.729 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:20.729 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 01:12:20.729 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:20.986 11:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:12:21.550 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:21.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:21.550 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:21.550 11:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:21.550 11:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:21.550 11:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:21.550 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:21.550 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:12:21.550 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:21.808 11:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:22.066 01:12:22.067 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:22.067 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:22.067 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:22.325 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:22.325 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:22.325 11:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:22.325 11:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:22.325 11:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:22.325 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:22.325 { 01:12:22.325 "auth": { 01:12:22.325 "dhgroup": "null", 01:12:22.325 "digest": "sha384", 01:12:22.325 "state": "completed" 01:12:22.325 }, 01:12:22.325 "cntlid": 51, 01:12:22.325 "listen_address": { 01:12:22.325 "adrfam": "IPv4", 01:12:22.325 "traddr": "10.0.0.2", 01:12:22.325 "trsvcid": "4420", 01:12:22.325 "trtype": "TCP" 01:12:22.325 }, 01:12:22.325 "peer_address": { 01:12:22.325 "adrfam": "IPv4", 01:12:22.325 "traddr": "10.0.0.1", 01:12:22.325 "trsvcid": "39426", 01:12:22.325 "trtype": "TCP" 01:12:22.325 }, 01:12:22.325 "qid": 0, 01:12:22.325 "state": "enabled", 01:12:22.325 "thread": "nvmf_tgt_poll_group_000" 01:12:22.325 } 01:12:22.325 ]' 01:12:22.325 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:22.582 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:22.582 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:22.582 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:12:22.582 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:22.582 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:22.582 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:22.582 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:22.840 11:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:12:23.405 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:23.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:23.405 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:23.406 
11:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:23.406 11:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:23.406 11:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:23.406 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:23.406 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:12:23.406 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:23.662 11:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:23.918 01:12:23.919 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:23.919 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:23.919 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:24.176 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:24.176 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:24.176 11:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:24.176 11:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:24.176 11:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:24.176 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:24.176 { 01:12:24.176 
"auth": { 01:12:24.176 "dhgroup": "null", 01:12:24.176 "digest": "sha384", 01:12:24.176 "state": "completed" 01:12:24.176 }, 01:12:24.176 "cntlid": 53, 01:12:24.176 "listen_address": { 01:12:24.176 "adrfam": "IPv4", 01:12:24.176 "traddr": "10.0.0.2", 01:12:24.176 "trsvcid": "4420", 01:12:24.176 "trtype": "TCP" 01:12:24.176 }, 01:12:24.176 "peer_address": { 01:12:24.176 "adrfam": "IPv4", 01:12:24.176 "traddr": "10.0.0.1", 01:12:24.176 "trsvcid": "39444", 01:12:24.176 "trtype": "TCP" 01:12:24.176 }, 01:12:24.176 "qid": 0, 01:12:24.176 "state": "enabled", 01:12:24.176 "thread": "nvmf_tgt_poll_group_000" 01:12:24.176 } 01:12:24.176 ]' 01:12:24.176 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:24.433 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:24.433 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:24.433 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:12:24.433 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:24.433 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:24.433 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:24.433 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:24.691 11:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:25.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:12:25.626 11:09:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:25.626 11:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:26.192 01:12:26.192 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:26.192 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:26.192 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:26.192 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:26.192 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:26.192 11:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:26.192 11:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:26.450 11:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:26.450 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:26.450 { 01:12:26.450 "auth": { 01:12:26.450 "dhgroup": "null", 01:12:26.450 "digest": "sha384", 01:12:26.450 "state": "completed" 01:12:26.450 }, 01:12:26.450 "cntlid": 55, 01:12:26.450 "listen_address": { 01:12:26.450 "adrfam": "IPv4", 01:12:26.450 "traddr": "10.0.0.2", 01:12:26.450 "trsvcid": "4420", 01:12:26.450 "trtype": "TCP" 01:12:26.450 }, 01:12:26.451 "peer_address": { 01:12:26.451 "adrfam": "IPv4", 01:12:26.451 "traddr": "10.0.0.1", 01:12:26.451 "trsvcid": "39470", 01:12:26.451 "trtype": "TCP" 01:12:26.451 }, 01:12:26.451 "qid": 0, 01:12:26.451 "state": "enabled", 01:12:26.451 "thread": "nvmf_tgt_poll_group_000" 01:12:26.451 } 01:12:26.451 ]' 01:12:26.451 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:26.451 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:26.451 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:26.451 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:12:26.451 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:26.451 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 01:12:26.451 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:26.451 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:26.709 11:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:27.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:12:27.275 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:27.534 11:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:28.101 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:28.101 { 01:12:28.101 "auth": { 01:12:28.101 "dhgroup": "ffdhe2048", 01:12:28.101 "digest": "sha384", 01:12:28.101 "state": "completed" 01:12:28.101 }, 01:12:28.101 "cntlid": 57, 01:12:28.101 "listen_address": { 01:12:28.101 "adrfam": "IPv4", 01:12:28.101 "traddr": "10.0.0.2", 01:12:28.101 "trsvcid": "4420", 01:12:28.101 "trtype": "TCP" 01:12:28.101 }, 01:12:28.101 "peer_address": { 01:12:28.101 "adrfam": "IPv4", 01:12:28.101 "traddr": "10.0.0.1", 01:12:28.101 "trsvcid": "39488", 01:12:28.101 "trtype": "TCP" 01:12:28.101 }, 01:12:28.101 "qid": 0, 01:12:28.101 "state": "enabled", 01:12:28.101 "thread": "nvmf_tgt_poll_group_000" 01:12:28.101 } 01:12:28.101 ]' 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:28.101 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:28.359 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:12:28.359 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:28.359 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:28.359 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:28.359 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:28.617 11:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:12:29.186 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:29.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:29.186 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:29.186 11:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:29.186 11:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:29.186 11:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:29.186 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:29.186 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:12:29.186 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:29.753 11:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:30.011 01:12:30.011 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:30.011 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:30.011 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:30.269 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:30.269 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:30.269 11:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:30.269 11:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:30.269 11:09:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:30.269 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:30.269 { 01:12:30.269 "auth": { 01:12:30.269 "dhgroup": "ffdhe2048", 01:12:30.269 "digest": "sha384", 01:12:30.269 "state": "completed" 01:12:30.269 }, 01:12:30.269 "cntlid": 59, 01:12:30.269 "listen_address": { 01:12:30.269 "adrfam": "IPv4", 01:12:30.269 "traddr": "10.0.0.2", 01:12:30.269 "trsvcid": "4420", 01:12:30.269 "trtype": "TCP" 01:12:30.269 }, 01:12:30.269 "peer_address": { 01:12:30.269 "adrfam": "IPv4", 01:12:30.269 "traddr": "10.0.0.1", 01:12:30.269 "trsvcid": "49222", 01:12:30.269 "trtype": "TCP" 01:12:30.269 }, 01:12:30.269 "qid": 0, 01:12:30.269 "state": "enabled", 01:12:30.270 "thread": "nvmf_tgt_poll_group_000" 01:12:30.270 } 01:12:30.270 ]' 01:12:30.270 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:30.270 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:30.270 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:30.270 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:12:30.270 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:30.270 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:30.270 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:30.270 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:30.528 11:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:12:31.097 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:31.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:31.097 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:31.097 11:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:31.097 11:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:31.097 11:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:31.097 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:31.097 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:12:31.097 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
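Each pass also exercises the same key pair in-band through the kernel initiator (the target/auth.sh@52-56 steps). Below is a minimal sketch of that check, with the flags and NQNs taken from the trace; the DHHC-1 secret strings are the generated test keys that appear verbatim in the log and are abbreviated here only for width.

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=8977fc08-3b30-49e8-886e-3a1f0545f479

  # In-band DH-HMAC-CHAP: the host secret and the controller (bidirectional)
  # secret are passed straight to nvme-cli; full DHHC-1 values are in the trace.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:01:YTY3...' \
      --dhchap-ctrl-secret 'DHHC-1:02:MjQ1...'

  # Disconnect and drop the host entry before the next digest/dhgroup pass.
  nvme disconnect -n "$subnqn"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:${hostid}"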
01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:31.388 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:31.953 01:12:31.953 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:31.953 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:31.953 11:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:32.211 { 01:12:32.211 "auth": { 01:12:32.211 "dhgroup": "ffdhe2048", 01:12:32.211 "digest": "sha384", 01:12:32.211 "state": "completed" 01:12:32.211 }, 01:12:32.211 "cntlid": 61, 01:12:32.211 "listen_address": { 01:12:32.211 "adrfam": "IPv4", 01:12:32.211 "traddr": "10.0.0.2", 01:12:32.211 "trsvcid": "4420", 01:12:32.211 "trtype": "TCP" 01:12:32.211 }, 01:12:32.211 "peer_address": { 01:12:32.211 "adrfam": "IPv4", 01:12:32.211 "traddr": "10.0.0.1", 01:12:32.211 "trsvcid": "49246", 01:12:32.211 "trtype": "TCP" 01:12:32.211 }, 01:12:32.211 "qid": 0, 01:12:32.211 "state": "enabled", 01:12:32.211 "thread": "nvmf_tgt_poll_group_000" 01:12:32.211 } 01:12:32.211 ]' 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:32.211 
11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:32.211 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:32.469 11:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:12:33.034 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:33.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:33.034 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:33.034 11:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:33.034 11:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:33.034 11:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:33.034 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:33.034 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:12:33.034 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:33.291 11:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:33.292 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:33.292 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:33.858 01:12:33.858 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:33.858 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:33.858 11:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:34.116 { 01:12:34.116 "auth": { 01:12:34.116 "dhgroup": "ffdhe2048", 01:12:34.116 "digest": "sha384", 01:12:34.116 "state": "completed" 01:12:34.116 }, 01:12:34.116 "cntlid": 63, 01:12:34.116 "listen_address": { 01:12:34.116 "adrfam": "IPv4", 01:12:34.116 "traddr": "10.0.0.2", 01:12:34.116 "trsvcid": "4420", 01:12:34.116 "trtype": "TCP" 01:12:34.116 }, 01:12:34.116 "peer_address": { 01:12:34.116 "adrfam": "IPv4", 01:12:34.116 "traddr": "10.0.0.1", 01:12:34.116 "trsvcid": "49286", 01:12:34.116 "trtype": "TCP" 01:12:34.116 }, 01:12:34.116 "qid": 0, 01:12:34.116 "state": "enabled", 01:12:34.116 "thread": "nvmf_tgt_poll_group_000" 01:12:34.116 } 01:12:34.116 ]' 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:34.116 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:34.374 11:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:35.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:35.308 11:09:40 
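The traces above repeat one provisioning round per key: the host-side bdev layer is pinned to a single digest/dhgroup pair, the host NQN is added to the subsystem with a DH-HMAC-CHAP key, and a controller is attached so authentication runs during connect. A condensed sketch of that round, assembled only from commands that appear in the trace ($HOSTNQN stands in for the nqn.2014-08.org.nvmexpress:uuid host NQN; the target-side RPC socket is assumed to be the default one, which the rpc_cmd wrapper hides):

# Pin the host bdev layer to one digest/dhgroup combination for this round.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Allow the host NQN on the subsystem with a DH-HMAC-CHAP key pair (target RPC socket assumed default).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller from the SPDK host; DH-HMAC-CHAP runs as part of the connect.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2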
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:35.308 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:35.873 01:12:35.873 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:35.873 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:35.873 11:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:36.131 { 01:12:36.131 "auth": { 01:12:36.131 "dhgroup": "ffdhe3072", 01:12:36.131 "digest": "sha384", 01:12:36.131 "state": "completed" 01:12:36.131 }, 01:12:36.131 "cntlid": 65, 01:12:36.131 "listen_address": { 01:12:36.131 "adrfam": "IPv4", 01:12:36.131 "traddr": "10.0.0.2", 01:12:36.131 "trsvcid": "4420", 01:12:36.131 "trtype": "TCP" 01:12:36.131 }, 01:12:36.131 "peer_address": { 01:12:36.131 "adrfam": "IPv4", 01:12:36.131 "traddr": "10.0.0.1", 01:12:36.131 "trsvcid": "49306", 01:12:36.131 "trtype": "TCP" 01:12:36.131 }, 01:12:36.131 "qid": 0, 01:12:36.131 "state": "enabled", 01:12:36.131 "thread": "nvmf_tgt_poll_group_000" 01:12:36.131 } 01:12:36.131 ]' 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:12:36.131 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:36.398 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:36.398 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:36.399 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:36.399 11:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:37.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:12:37.333 11:09:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:37.333 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:37.901 01:12:37.901 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:37.901 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:37.901 11:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:38.160 { 01:12:38.160 "auth": { 01:12:38.160 "dhgroup": "ffdhe3072", 01:12:38.160 "digest": "sha384", 01:12:38.160 "state": "completed" 01:12:38.160 }, 01:12:38.160 "cntlid": 67, 01:12:38.160 "listen_address": { 01:12:38.160 "adrfam": "IPv4", 01:12:38.160 "traddr": "10.0.0.2", 01:12:38.160 "trsvcid": "4420", 01:12:38.160 "trtype": "TCP" 01:12:38.160 }, 01:12:38.160 "peer_address": { 01:12:38.160 "adrfam": "IPv4", 01:12:38.160 "traddr": "10.0.0.1", 01:12:38.160 "trsvcid": "49332", 01:12:38.160 "trtype": "TCP" 01:12:38.160 }, 01:12:38.160 "qid": 0, 01:12:38.160 "state": "enabled", 01:12:38.160 "thread": "nvmf_tgt_poll_group_000" 01:12:38.160 } 01:12:38.160 ]' 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:38.160 
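After each attach, the script verifies on the target what was actually negotiated by dumping the subsystem's queue pairs and filtering the auth block with jq, exactly as in the trace above. A minimal sketch of that check, assuming the target listens on the default RPC socket:

# Dump the subsystem's queue pairs and inspect the negotiated auth parameters.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
echo "$qpairs" | jq -r '.[0].auth.digest'    # expected: sha384
echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expected: the dhgroup set for this round, e.g. ffdhe3072
echo "$qpairs" | jq -r '.[0].auth.state'     # expected: completed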
11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:12:38.160 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:38.418 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:38.418 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:38.418 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:38.676 11:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:12:39.241 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:39.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:39.241 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:39.241 11:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:39.241 11:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:39.241 11:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:39.241 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:39.241 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:12:39.241 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:39.499 11:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:40.063 01:12:40.064 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:40.064 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:40.064 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:40.321 { 01:12:40.321 "auth": { 01:12:40.321 "dhgroup": "ffdhe3072", 01:12:40.321 "digest": "sha384", 01:12:40.321 "state": "completed" 01:12:40.321 }, 01:12:40.321 "cntlid": 69, 01:12:40.321 "listen_address": { 01:12:40.321 "adrfam": "IPv4", 01:12:40.321 "traddr": "10.0.0.2", 01:12:40.321 "trsvcid": "4420", 01:12:40.321 "trtype": "TCP" 01:12:40.321 }, 01:12:40.321 "peer_address": { 01:12:40.321 "adrfam": "IPv4", 01:12:40.321 "traddr": "10.0.0.1", 01:12:40.321 "trsvcid": "50728", 01:12:40.321 "trtype": "TCP" 01:12:40.321 }, 01:12:40.321 "qid": 0, 01:12:40.321 "state": "enabled", 01:12:40.321 "thread": "nvmf_tgt_poll_group_000" 01:12:40.321 } 01:12:40.321 ]' 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:40.321 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:40.886 11:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret 
DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:12:41.144 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:41.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:41.144 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:41.144 11:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:41.144 11:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:41.402 11:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:41.402 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:41.402 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:12:41.402 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:41.661 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:41.918 01:12:41.918 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:41.918 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:41.918 11:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:42.176 11:09:47 
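Besides the SPDK host, each round also exercises the Linux kernel host through nvme-cli, passing the DH-HMAC-CHAP secrets on the command line and tearing the session down again so the next round starts clean. A sketch of that leg, with <host-secret>/<ctrl-secret> standing in for the DHHC-1:... strings shown in the log and $HOSTNQN/$HOSTID for the host NQN and UUID:

# Kernel-host connect with explicit DHCHAP secrets, then tear down.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "<host-secret>" --dhchap-ctrl-secret "<ctrl-secret>"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Remove the host entry so the next key/dhgroup round is provisioned from scratch.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"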
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:42.176 { 01:12:42.176 "auth": { 01:12:42.176 "dhgroup": "ffdhe3072", 01:12:42.176 "digest": "sha384", 01:12:42.176 "state": "completed" 01:12:42.176 }, 01:12:42.176 "cntlid": 71, 01:12:42.176 "listen_address": { 01:12:42.176 "adrfam": "IPv4", 01:12:42.176 "traddr": "10.0.0.2", 01:12:42.176 "trsvcid": "4420", 01:12:42.176 "trtype": "TCP" 01:12:42.176 }, 01:12:42.176 "peer_address": { 01:12:42.176 "adrfam": "IPv4", 01:12:42.176 "traddr": "10.0.0.1", 01:12:42.176 "trsvcid": "50758", 01:12:42.176 "trtype": "TCP" 01:12:42.176 }, 01:12:42.176 "qid": 0, 01:12:42.176 "state": "enabled", 01:12:42.176 "thread": "nvmf_tgt_poll_group_000" 01:12:42.176 } 01:12:42.176 ]' 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:12:42.176 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:42.434 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:42.434 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:42.434 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:42.699 11:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:43.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:12:43.264 11:09:48 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:12:43.522 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 01:12:43.522 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:43.522 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:43.522 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:12:43.522 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:12:43.522 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:43.523 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:43.523 11:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:43.523 11:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:43.523 11:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:43.523 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:43.523 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:43.781 01:12:43.781 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:43.781 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:43.781 11:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:44.347 { 01:12:44.347 "auth": { 01:12:44.347 "dhgroup": "ffdhe4096", 01:12:44.347 "digest": "sha384", 01:12:44.347 "state": "completed" 01:12:44.347 }, 01:12:44.347 "cntlid": 73, 01:12:44.347 "listen_address": { 01:12:44.347 "adrfam": "IPv4", 01:12:44.347 "traddr": "10.0.0.2", 01:12:44.347 "trsvcid": "4420", 01:12:44.347 "trtype": "TCP" 01:12:44.347 }, 01:12:44.347 "peer_address": { 01:12:44.347 "adrfam": "IPv4", 01:12:44.347 "traddr": "10.0.0.1", 01:12:44.347 "trsvcid": "50800", 01:12:44.347 "trtype": "TCP" 01:12:44.347 }, 01:12:44.347 "qid": 0, 01:12:44.347 "state": "enabled", 
01:12:44.347 "thread": "nvmf_tgt_poll_group_000" 01:12:44.347 } 01:12:44.347 ]' 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:44.347 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:44.605 11:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:12:45.169 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:45.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:45.169 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:45.170 11:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:45.170 11:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:45.170 11:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:45.170 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:45.170 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:12:45.170 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:45.760 11:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:46.018 01:12:46.018 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:46.018 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:46.018 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:46.276 { 01:12:46.276 "auth": { 01:12:46.276 "dhgroup": "ffdhe4096", 01:12:46.276 "digest": "sha384", 01:12:46.276 "state": "completed" 01:12:46.276 }, 01:12:46.276 "cntlid": 75, 01:12:46.276 "listen_address": { 01:12:46.276 "adrfam": "IPv4", 01:12:46.276 "traddr": "10.0.0.2", 01:12:46.276 "trsvcid": "4420", 01:12:46.276 "trtype": "TCP" 01:12:46.276 }, 01:12:46.276 "peer_address": { 01:12:46.276 "adrfam": "IPv4", 01:12:46.276 "traddr": "10.0.0.1", 01:12:46.276 "trsvcid": "50816", 01:12:46.276 "trtype": "TCP" 01:12:46.276 }, 01:12:46.276 "qid": 0, 01:12:46.276 "state": "enabled", 01:12:46.276 "thread": "nvmf_tgt_poll_group_000" 01:12:46.276 } 01:12:46.276 ]' 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:46.276 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:46.533 11:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:47.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:12:47.467 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:47.468 11:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:48.032 01:12:48.032 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:48.032 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:48.032 
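Before detaching, the trace also confirms that the attached controller shows up under the expected name by listing the host's controllers and comparing against nvme0. A brief sketch of that step and the subsequent detach, using the same host RPC socket as the trace:

# Check the controller name on the host side, then detach it.
name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0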
11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:48.289 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:48.290 { 01:12:48.290 "auth": { 01:12:48.290 "dhgroup": "ffdhe4096", 01:12:48.290 "digest": "sha384", 01:12:48.290 "state": "completed" 01:12:48.290 }, 01:12:48.290 "cntlid": 77, 01:12:48.290 "listen_address": { 01:12:48.290 "adrfam": "IPv4", 01:12:48.290 "traddr": "10.0.0.2", 01:12:48.290 "trsvcid": "4420", 01:12:48.290 "trtype": "TCP" 01:12:48.290 }, 01:12:48.290 "peer_address": { 01:12:48.290 "adrfam": "IPv4", 01:12:48.290 "traddr": "10.0.0.1", 01:12:48.290 "trsvcid": "50828", 01:12:48.290 "trtype": "TCP" 01:12:48.290 }, 01:12:48.290 "qid": 0, 01:12:48.290 "state": "enabled", 01:12:48.290 "thread": "nvmf_tgt_poll_group_000" 01:12:48.290 } 01:12:48.290 ]' 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:48.290 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:12:48.547 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:48.547 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:48.547 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:48.547 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:48.804 11:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:12:49.369 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:49.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:49.369 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:49.369 11:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:49.369 11:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:49.369 11:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:49.369 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 01:12:49.369 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:12:49.369 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:49.944 11:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:49.945 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:49.945 11:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:50.202 01:12:50.202 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:50.202 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:50.202 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:50.460 { 01:12:50.460 "auth": { 01:12:50.460 "dhgroup": "ffdhe4096", 01:12:50.460 "digest": "sha384", 01:12:50.460 "state": "completed" 01:12:50.460 }, 01:12:50.460 "cntlid": 79, 01:12:50.460 "listen_address": { 01:12:50.460 "adrfam": "IPv4", 01:12:50.460 "traddr": "10.0.0.2", 01:12:50.460 "trsvcid": "4420", 01:12:50.460 "trtype": "TCP" 01:12:50.460 }, 01:12:50.460 "peer_address": { 01:12:50.460 "adrfam": "IPv4", 01:12:50.460 
"traddr": "10.0.0.1", 01:12:50.460 "trsvcid": "59944", 01:12:50.460 "trtype": "TCP" 01:12:50.460 }, 01:12:50.460 "qid": 0, 01:12:50.460 "state": "enabled", 01:12:50.460 "thread": "nvmf_tgt_poll_group_000" 01:12:50.460 } 01:12:50.460 ]' 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:50.460 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:50.718 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:12:50.718 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:50.718 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:50.718 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:50.718 11:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:50.976 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:51.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:12:51.541 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:51.799 11:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:12:52.365 01:12:52.365 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:52.365 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:52.365 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:52.624 { 01:12:52.624 "auth": { 01:12:52.624 "dhgroup": "ffdhe6144", 01:12:52.624 "digest": "sha384", 01:12:52.624 "state": "completed" 01:12:52.624 }, 01:12:52.624 "cntlid": 81, 01:12:52.624 "listen_address": { 01:12:52.624 "adrfam": "IPv4", 01:12:52.624 "traddr": "10.0.0.2", 01:12:52.624 "trsvcid": "4420", 01:12:52.624 "trtype": "TCP" 01:12:52.624 }, 01:12:52.624 "peer_address": { 01:12:52.624 "adrfam": "IPv4", 01:12:52.624 "traddr": "10.0.0.1", 01:12:52.624 "trsvcid": "59982", 01:12:52.624 "trtype": "TCP" 01:12:52.624 }, 01:12:52.624 "qid": 0, 01:12:52.624 "state": "enabled", 01:12:52.624 "thread": "nvmf_tgt_poll_group_000" 01:12:52.624 } 01:12:52.624 ]' 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:52.624 11:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:53.191 11:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:12:53.757 11:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:53.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:53.757 11:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:53.757 11:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:53.757 11:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:53.757 11:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:53.757 11:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:53.757 11:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:12:53.757 11:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:12:54.015 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
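The outer structure driving all of these rounds is visible in the trace as two loops in target/auth.sh (for dhgroup in "${dhgroups[@]}" at auth.sh@92 and for keyid in "${!keys[@]}" at auth.sh@93), with connect_authenticate called once per combination. A reconstructed sketch of that driver, assuming the arrays hold the values observed in this part of the log (only the loop headers themselves appear in the trace):

# Reconstructed driver loop; dhgroup/keyid values inferred from the rounds logged above.
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in 0 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
            bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"   # helper defined in target/auth.sh
    done
done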
01:12:54.580 01:12:54.580 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:54.580 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:54.580 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:54.580 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:54.580 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:54.580 11:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:54.580 11:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:54.580 11:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:54.838 { 01:12:54.838 "auth": { 01:12:54.838 "dhgroup": "ffdhe6144", 01:12:54.838 "digest": "sha384", 01:12:54.838 "state": "completed" 01:12:54.838 }, 01:12:54.838 "cntlid": 83, 01:12:54.838 "listen_address": { 01:12:54.838 "adrfam": "IPv4", 01:12:54.838 "traddr": "10.0.0.2", 01:12:54.838 "trsvcid": "4420", 01:12:54.838 "trtype": "TCP" 01:12:54.838 }, 01:12:54.838 "peer_address": { 01:12:54.838 "adrfam": "IPv4", 01:12:54.838 "traddr": "10.0.0.1", 01:12:54.838 "trsvcid": "60014", 01:12:54.838 "trtype": "TCP" 01:12:54.838 }, 01:12:54.838 "qid": 0, 01:12:54.838 "state": "enabled", 01:12:54.838 "thread": "nvmf_tgt_poll_group_000" 01:12:54.838 } 01:12:54.838 ]' 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:54.838 11:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:55.096 11:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:12:55.663 11:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:55.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:55.663 11:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:55.663 11:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:55.663 11:10:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:55.663 11:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:55.663 11:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:55.663 11:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:12:55.663 11:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:55.922 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:12:56.485 01:12:56.485 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:56.485 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:56.485 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:56.743 { 01:12:56.743 "auth": { 01:12:56.743 "dhgroup": "ffdhe6144", 01:12:56.743 "digest": "sha384", 01:12:56.743 
"state": "completed" 01:12:56.743 }, 01:12:56.743 "cntlid": 85, 01:12:56.743 "listen_address": { 01:12:56.743 "adrfam": "IPv4", 01:12:56.743 "traddr": "10.0.0.2", 01:12:56.743 "trsvcid": "4420", 01:12:56.743 "trtype": "TCP" 01:12:56.743 }, 01:12:56.743 "peer_address": { 01:12:56.743 "adrfam": "IPv4", 01:12:56.743 "traddr": "10.0.0.1", 01:12:56.743 "trsvcid": "60036", 01:12:56.743 "trtype": "TCP" 01:12:56.743 }, 01:12:56.743 "qid": 0, 01:12:56.743 "state": "enabled", 01:12:56.743 "thread": "nvmf_tgt_poll_group_000" 01:12:56.743 } 01:12:56.743 ]' 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:56.743 11:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:57.000 11:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:12:57.940 11:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:57.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:57.940 11:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:57.940 11:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:57.940 11:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:57.940 11:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:57.940 11:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:57.940 11:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:12:57.940 11:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:57.940 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:12:58.508 01:12:58.508 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:12:58.508 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:12:58.508 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:12:58.765 { 01:12:58.765 "auth": { 01:12:58.765 "dhgroup": "ffdhe6144", 01:12:58.765 "digest": "sha384", 01:12:58.765 "state": "completed" 01:12:58.765 }, 01:12:58.765 "cntlid": 87, 01:12:58.765 "listen_address": { 01:12:58.765 "adrfam": "IPv4", 01:12:58.765 "traddr": "10.0.0.2", 01:12:58.765 "trsvcid": "4420", 01:12:58.765 "trtype": "TCP" 01:12:58.765 }, 01:12:58.765 "peer_address": { 01:12:58.765 "adrfam": "IPv4", 01:12:58.765 "traddr": "10.0.0.1", 01:12:58.765 "trsvcid": "60060", 01:12:58.765 "trtype": "TCP" 01:12:58.765 }, 01:12:58.765 "qid": 0, 01:12:58.765 "state": "enabled", 01:12:58.765 "thread": "nvmf_tgt_poll_group_000" 01:12:58.765 } 01:12:58.765 ]' 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:12:58.765 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:12:59.023 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:12:59.023 11:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:12:59.023 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:12:59.023 11:10:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:12:59.023 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:12:59.281 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:12:59.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:12:59.848 11:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:00.120 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:00.121 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:00.742 01:13:00.742 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:00.742 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:00.742 11:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:00.999 { 01:13:00.999 "auth": { 01:13:00.999 "dhgroup": "ffdhe8192", 01:13:00.999 "digest": "sha384", 01:13:00.999 "state": "completed" 01:13:00.999 }, 01:13:00.999 "cntlid": 89, 01:13:00.999 "listen_address": { 01:13:00.999 "adrfam": "IPv4", 01:13:00.999 "traddr": "10.0.0.2", 01:13:00.999 "trsvcid": "4420", 01:13:00.999 "trtype": "TCP" 01:13:00.999 }, 01:13:00.999 "peer_address": { 01:13:00.999 "adrfam": "IPv4", 01:13:00.999 "traddr": "10.0.0.1", 01:13:00.999 "trsvcid": "55230", 01:13:00.999 "trtype": "TCP" 01:13:00.999 }, 01:13:00.999 "qid": 0, 01:13:00.999 "state": "enabled", 01:13:00.999 "thread": "nvmf_tgt_poll_group_000" 01:13:00.999 } 01:13:00.999 ]' 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:00.999 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:01.257 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:01.257 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:01.257 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:01.514 11:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:13:02.079 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:02.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:02.079 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:02.079 11:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:02.079 11:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:02.079 11:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:02.079 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:02.079 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:13:02.079 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:02.337 11:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:02.902 01:13:02.902 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:02.902 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:02.902 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:03.160 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:03.160 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:03.160 11:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:03.160 11:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:03.417 11:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:03.417 
11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:03.417 { 01:13:03.417 "auth": { 01:13:03.417 "dhgroup": "ffdhe8192", 01:13:03.417 "digest": "sha384", 01:13:03.417 "state": "completed" 01:13:03.417 }, 01:13:03.417 "cntlid": 91, 01:13:03.417 "listen_address": { 01:13:03.417 "adrfam": "IPv4", 01:13:03.417 "traddr": "10.0.0.2", 01:13:03.417 "trsvcid": "4420", 01:13:03.417 "trtype": "TCP" 01:13:03.417 }, 01:13:03.417 "peer_address": { 01:13:03.417 "adrfam": "IPv4", 01:13:03.417 "traddr": "10.0.0.1", 01:13:03.417 "trsvcid": "55274", 01:13:03.417 "trtype": "TCP" 01:13:03.417 }, 01:13:03.417 "qid": 0, 01:13:03.417 "state": "enabled", 01:13:03.417 "thread": "nvmf_tgt_poll_group_000" 01:13:03.417 } 01:13:03.417 ]' 01:13:03.417 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:03.417 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:13:03.418 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:03.418 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:03.418 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:03.418 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:03.418 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:03.418 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:03.676 11:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:04.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:04.631 11:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:05.196 01:13:05.454 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:05.454 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:05.454 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:05.711 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:05.711 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:05.711 11:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:05.711 11:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:05.711 11:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:05.711 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:05.711 { 01:13:05.711 "auth": { 01:13:05.711 "dhgroup": "ffdhe8192", 01:13:05.711 "digest": "sha384", 01:13:05.711 "state": "completed" 01:13:05.711 }, 01:13:05.711 "cntlid": 93, 01:13:05.711 "listen_address": { 01:13:05.711 "adrfam": "IPv4", 01:13:05.711 "traddr": "10.0.0.2", 01:13:05.711 "trsvcid": "4420", 01:13:05.711 "trtype": "TCP" 01:13:05.711 }, 01:13:05.711 "peer_address": { 01:13:05.711 "adrfam": "IPv4", 01:13:05.711 "traddr": "10.0.0.1", 01:13:05.711 "trsvcid": "55308", 01:13:05.711 "trtype": "TCP" 01:13:05.711 }, 01:13:05.711 "qid": 0, 01:13:05.711 "state": "enabled", 01:13:05.711 "thread": "nvmf_tgt_poll_group_000" 01:13:05.711 } 01:13:05.711 ]' 01:13:05.712 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:05.712 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:13:05.712 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:05.712 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:05.712 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:05.712 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:05.712 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:05.712 11:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:06.278 11:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:13:06.877 11:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:06.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:06.877 11:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:06.877 11:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:06.877 11:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:06.877 11:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:06.877 11:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:06.877 11:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:13:06.877 11:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:07.134 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:07.134 11:10:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:07.700 01:13:07.700 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:07.700 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:07.700 11:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:07.957 { 01:13:07.957 "auth": { 01:13:07.957 "dhgroup": "ffdhe8192", 01:13:07.957 "digest": "sha384", 01:13:07.957 "state": "completed" 01:13:07.957 }, 01:13:07.957 "cntlid": 95, 01:13:07.957 "listen_address": { 01:13:07.957 "adrfam": "IPv4", 01:13:07.957 "traddr": "10.0.0.2", 01:13:07.957 "trsvcid": "4420", 01:13:07.957 "trtype": "TCP" 01:13:07.957 }, 01:13:07.957 "peer_address": { 01:13:07.957 "adrfam": "IPv4", 01:13:07.957 "traddr": "10.0.0.1", 01:13:07.957 "trsvcid": "55336", 01:13:07.957 "trtype": "TCP" 01:13:07.957 }, 01:13:07.957 "qid": 0, 01:13:07.957 "state": "enabled", 01:13:07.957 "thread": "nvmf_tgt_poll_group_000" 01:13:07.957 } 01:13:07.957 ]' 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:13:07.957 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:08.214 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:08.214 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:08.214 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:08.214 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:08.214 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:08.471 11:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:09.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:13:09.037 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:09.296 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:09.875 01:13:09.875 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:09.875 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:09.875 11:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:10.133 { 01:13:10.133 "auth": { 01:13:10.133 "dhgroup": "null", 01:13:10.133 "digest": "sha512", 01:13:10.133 "state": "completed" 01:13:10.133 }, 01:13:10.133 "cntlid": 97, 01:13:10.133 "listen_address": { 01:13:10.133 "adrfam": "IPv4", 01:13:10.133 "traddr": "10.0.0.2", 01:13:10.133 "trsvcid": "4420", 01:13:10.133 "trtype": "TCP" 01:13:10.133 }, 01:13:10.133 "peer_address": { 01:13:10.133 "adrfam": "IPv4", 01:13:10.133 "traddr": "10.0.0.1", 01:13:10.133 "trsvcid": "37260", 01:13:10.133 "trtype": "TCP" 01:13:10.133 }, 01:13:10.133 "qid": 0, 01:13:10.133 "state": "enabled", 01:13:10.133 "thread": "nvmf_tgt_poll_group_000" 01:13:10.133 } 01:13:10.133 ]' 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:13:10.133 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:10.391 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:10.391 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:10.391 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:10.649 11:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:13:11.215 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:11.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:11.215 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:11.215 11:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:11.215 11:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:11.215 11:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:11.215 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:11.215 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:13:11.215 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:13:11.472 11:10:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:11.472 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:12.038 01:13:12.038 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:12.038 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:12.038 11:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:12.038 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:12.038 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:12.038 11:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:12.038 11:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:12.038 11:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:12.038 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:12.038 { 01:13:12.038 "auth": { 01:13:12.038 "dhgroup": "null", 01:13:12.038 "digest": "sha512", 01:13:12.038 "state": "completed" 01:13:12.038 }, 01:13:12.038 "cntlid": 99, 01:13:12.038 "listen_address": { 01:13:12.038 "adrfam": "IPv4", 01:13:12.038 "traddr": "10.0.0.2", 01:13:12.038 "trsvcid": "4420", 01:13:12.038 "trtype": "TCP" 01:13:12.038 }, 01:13:12.038 "peer_address": { 01:13:12.038 "adrfam": "IPv4", 01:13:12.038 "traddr": "10.0.0.1", 01:13:12.038 "trsvcid": "37282", 01:13:12.038 "trtype": "TCP" 01:13:12.038 }, 01:13:12.038 "qid": 0, 01:13:12.038 "state": "enabled", 01:13:12.038 "thread": "nvmf_tgt_poll_group_000" 01:13:12.038 } 01:13:12.038 ]' 01:13:12.038 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:12.296 11:10:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:12.296 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:12.296 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:13:12.296 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:12.296 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:12.296 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:12.296 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:12.554 11:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:13:13.120 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:13.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:13.120 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:13.120 11:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:13.120 11:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:13.120 11:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:13.120 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:13.120 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:13:13.120 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:13.378 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:13.636 01:13:13.894 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:13.894 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:13.894 11:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:13.894 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:13.894 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:13.894 11:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:13.894 11:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:13.894 11:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:13.894 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:13.894 { 01:13:13.894 "auth": { 01:13:13.894 "dhgroup": "null", 01:13:13.894 "digest": "sha512", 01:13:13.894 "state": "completed" 01:13:13.894 }, 01:13:13.894 "cntlid": 101, 01:13:13.894 "listen_address": { 01:13:13.894 "adrfam": "IPv4", 01:13:13.894 "traddr": "10.0.0.2", 01:13:13.894 "trsvcid": "4420", 01:13:13.894 "trtype": "TCP" 01:13:13.894 }, 01:13:13.894 "peer_address": { 01:13:13.894 "adrfam": "IPv4", 01:13:13.894 "traddr": "10.0.0.1", 01:13:13.894 "trsvcid": "37308", 01:13:13.894 "trtype": "TCP" 01:13:13.894 }, 01:13:13.894 "qid": 0, 01:13:13.894 "state": "enabled", 01:13:13.894 "thread": "nvmf_tgt_poll_group_000" 01:13:13.894 } 01:13:13.894 ]' 01:13:13.894 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:14.152 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:14.152 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:14.152 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:13:14.152 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:14.152 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:14.152 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:14.152 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:14.437 11:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret 
DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:13:15.016 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:15.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:15.016 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:15.016 11:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:15.016 11:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:15.016 11:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:15.016 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:15.016 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:13:15.016 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:15.274 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:15.840 01:13:15.840 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:15.840 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:15.840 11:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:15.840 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:15.840 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
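The auth.sh@91, @92 and @93 frames that keep reappearing in the trace are the nested loops driving these combinations: every configured digest is crossed with every DH group and every key index (key3 carries no controller key, so that iteration authenticates in one direction only). A rough sketch of that driver follows, reusing the variables from the first sketch; the digest and dhgroup arrays are an assumption reconstructed from the values exercised in this log, not copied from target/auth.sh.

  digests=(sha256 sha384 sha512)                                      # assumed list
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed list
  for digest in "${digests[@]}"; do        # target/auth.sh@91
      for dhgroup in "${dhgroups[@]}"; do  # target/auth.sh@92
          for keyid in 0 1 2 3; do         # @93 iterates "${!keys[@]}", i.e. 0..3 in this run
              # @94: pin the host to one digest/dhgroup pair for this pass.
              "$rpc" -s "$hostsock" bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # @96: connect_authenticate "$digest" "$dhgroup" "$keyid", i.e. the
              # add_host / attach / verify / teardown sequence sketched above.
          done
      done
  done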
01:13:15.840 11:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:15.840 11:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:15.840 11:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:15.840 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:15.840 { 01:13:15.840 "auth": { 01:13:15.840 "dhgroup": "null", 01:13:15.840 "digest": "sha512", 01:13:15.840 "state": "completed" 01:13:15.840 }, 01:13:15.840 "cntlid": 103, 01:13:15.840 "listen_address": { 01:13:15.840 "adrfam": "IPv4", 01:13:15.840 "traddr": "10.0.0.2", 01:13:15.840 "trsvcid": "4420", 01:13:15.840 "trtype": "TCP" 01:13:15.840 }, 01:13:15.840 "peer_address": { 01:13:15.840 "adrfam": "IPv4", 01:13:15.840 "traddr": "10.0.0.1", 01:13:15.840 "trsvcid": "37338", 01:13:15.840 "trtype": "TCP" 01:13:15.840 }, 01:13:15.840 "qid": 0, 01:13:15.840 "state": "enabled", 01:13:15.840 "thread": "nvmf_tgt_poll_group_000" 01:13:15.840 } 01:13:15.840 ]' 01:13:15.840 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:16.098 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:16.098 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:16.098 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 01:13:16.098 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:16.098 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:16.098 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:16.098 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:16.355 11:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:17.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:17.285 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:17.848 01:13:17.848 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:17.848 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:17.848 11:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:18.105 { 01:13:18.105 "auth": { 01:13:18.105 "dhgroup": "ffdhe2048", 01:13:18.105 "digest": "sha512", 01:13:18.105 "state": "completed" 01:13:18.105 }, 01:13:18.105 "cntlid": 105, 01:13:18.105 "listen_address": { 01:13:18.105 "adrfam": "IPv4", 01:13:18.105 "traddr": "10.0.0.2", 01:13:18.105 "trsvcid": "4420", 01:13:18.105 "trtype": "TCP" 01:13:18.105 }, 01:13:18.105 "peer_address": { 01:13:18.105 "adrfam": "IPv4", 01:13:18.105 "traddr": "10.0.0.1", 01:13:18.105 "trsvcid": "37358", 01:13:18.105 "trtype": "TCP" 01:13:18.105 }, 01:13:18.105 "qid": 0, 01:13:18.105 "state": "enabled", 01:13:18.105 "thread": "nvmf_tgt_poll_group_000" 01:13:18.105 } 01:13:18.105 ]' 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:18.105 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:18.668 11:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:13:19.234 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:19.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:19.234 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:19.234 11:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:19.234 11:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:19.234 11:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:19.234 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:19.234 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:13:19.234 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:19.491 11:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:20.055 01:13:20.055 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:20.055 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:20.055 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:20.313 { 01:13:20.313 "auth": { 01:13:20.313 "dhgroup": "ffdhe2048", 01:13:20.313 "digest": "sha512", 01:13:20.313 "state": "completed" 01:13:20.313 }, 01:13:20.313 "cntlid": 107, 01:13:20.313 "listen_address": { 01:13:20.313 "adrfam": "IPv4", 01:13:20.313 "traddr": "10.0.0.2", 01:13:20.313 "trsvcid": "4420", 01:13:20.313 "trtype": "TCP" 01:13:20.313 }, 01:13:20.313 "peer_address": { 01:13:20.313 "adrfam": "IPv4", 01:13:20.313 "traddr": "10.0.0.1", 01:13:20.313 "trsvcid": "58320", 01:13:20.313 "trtype": "TCP" 01:13:20.313 }, 01:13:20.313 "qid": 0, 01:13:20.313 "state": "enabled", 01:13:20.313 "thread": "nvmf_tgt_poll_group_000" 01:13:20.313 } 01:13:20.313 ]' 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:20.313 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:20.879 11:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 
--hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:13:21.444 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:21.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:21.444 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:21.444 11:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:21.444 11:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:21.444 11:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:21.444 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:21.444 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:13:21.444 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:21.701 11:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:21.958 01:13:21.958 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:21.958 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:21.958 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:22.525 { 01:13:22.525 "auth": { 01:13:22.525 "dhgroup": "ffdhe2048", 01:13:22.525 "digest": "sha512", 01:13:22.525 "state": "completed" 01:13:22.525 }, 01:13:22.525 "cntlid": 109, 01:13:22.525 "listen_address": { 01:13:22.525 "adrfam": "IPv4", 01:13:22.525 "traddr": "10.0.0.2", 01:13:22.525 "trsvcid": "4420", 01:13:22.525 "trtype": "TCP" 01:13:22.525 }, 01:13:22.525 "peer_address": { 01:13:22.525 "adrfam": "IPv4", 01:13:22.525 "traddr": "10.0.0.1", 01:13:22.525 "trsvcid": "58344", 01:13:22.525 "trtype": "TCP" 01:13:22.525 }, 01:13:22.525 "qid": 0, 01:13:22.525 "state": "enabled", 01:13:22.525 "thread": "nvmf_tgt_poll_group_000" 01:13:22.525 } 01:13:22.525 ]' 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:22.525 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:22.782 11:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:13:23.713 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:23.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:23.713 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:23.713 11:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:23.714 11:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:24.278 01:13:24.278 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:24.278 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:24.278 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:24.536 { 01:13:24.536 "auth": { 01:13:24.536 "dhgroup": "ffdhe2048", 01:13:24.536 "digest": "sha512", 01:13:24.536 "state": "completed" 01:13:24.536 }, 01:13:24.536 "cntlid": 111, 01:13:24.536 "listen_address": { 01:13:24.536 "adrfam": "IPv4", 01:13:24.536 "traddr": "10.0.0.2", 01:13:24.536 "trsvcid": "4420", 01:13:24.536 "trtype": "TCP" 01:13:24.536 }, 01:13:24.536 "peer_address": { 01:13:24.536 "adrfam": "IPv4", 01:13:24.536 "traddr": "10.0.0.1", 01:13:24.536 "trsvcid": "58362", 01:13:24.536 "trtype": "TCP" 01:13:24.536 }, 
01:13:24.536 "qid": 0, 01:13:24.536 "state": "enabled", 01:13:24.536 "thread": "nvmf_tgt_poll_group_000" 01:13:24.536 } 01:13:24.536 ]' 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:24.536 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:24.794 11:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:25.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:13:25.360 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
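Each pass of the key loop traced here reduces to the RPC sequence below; this is a sketch of the sha512/ffdhe3072/key0 pass, where key0/ckey0 are key names registered earlier in auth.sh (not shown in this excerpt), the target-side socket is assumed to be the default, and the NQN/address values are copied from the trace.

# one pass of the digest/dhgroup/key loop (sketch, see note above)
SPDK=/home/vagrant/spdk_repo/spdk
HOSTRPC="$SPDK/scripts/rpc.py -s /var/tmp/host.sock"
TGTRPC="$SPDK/scripts/rpc.py"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479

# restrict the host initiator to the digest/dhgroup under test
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# allow this host on the subsystem, with bidirectional DH-HMAC-CHAP keys
$TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# attach from the host-side bdev_nvme instance, authenticating with the same keys
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# ... check auth.digest/dhgroup/state as in the earlier sketch, then tear down ...
$HOSTRPC bdev_nvme_detach_controller nvme0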
01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:25.926 11:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:26.184 01:13:26.184 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:26.184 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:26.185 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:26.442 { 01:13:26.442 "auth": { 01:13:26.442 "dhgroup": "ffdhe3072", 01:13:26.442 "digest": "sha512", 01:13:26.442 "state": "completed" 01:13:26.442 }, 01:13:26.442 "cntlid": 113, 01:13:26.442 "listen_address": { 01:13:26.442 "adrfam": "IPv4", 01:13:26.442 "traddr": "10.0.0.2", 01:13:26.442 "trsvcid": "4420", 01:13:26.442 "trtype": "TCP" 01:13:26.442 }, 01:13:26.442 "peer_address": { 01:13:26.442 "adrfam": "IPv4", 01:13:26.442 "traddr": "10.0.0.1", 01:13:26.442 "trsvcid": "58394", 01:13:26.442 "trtype": "TCP" 01:13:26.442 }, 01:13:26.442 "qid": 0, 01:13:26.442 "state": "enabled", 01:13:26.442 "thread": "nvmf_tgt_poll_group_000" 01:13:26.442 } 01:13:26.442 ]' 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:26.442 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:26.699 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:13:26.699 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:26.699 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:26.699 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:26.699 11:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:26.958 11:10:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:13:27.532 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:27.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:27.532 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:27.532 11:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:27.532 11:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:27.532 11:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:27.532 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:27.532 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:13:27.532 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:27.790 11:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:28.355 01:13:28.355 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
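The same keys are also exercised through the kernel initiator with nvme-cli, as in the connect/disconnect lines of this pass; the sketch below mirrors those commands but uses shortened placeholder DHHC-1 secrets rather than the real test keys.

# kernel-initiator path with nvme-cli (sketch; secrets are placeholders)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479
HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479

# connect with the host secret and the controller (bidirectional) secret
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret      'DHHC-1:00:<host key, base64>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key, base64>:'

nvme disconnect -n "$SUBNQN"   # expect "disconnected 1 controller(s)" as in the trace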
01:13:28.355 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:28.355 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:28.613 { 01:13:28.613 "auth": { 01:13:28.613 "dhgroup": "ffdhe3072", 01:13:28.613 "digest": "sha512", 01:13:28.613 "state": "completed" 01:13:28.613 }, 01:13:28.613 "cntlid": 115, 01:13:28.613 "listen_address": { 01:13:28.613 "adrfam": "IPv4", 01:13:28.613 "traddr": "10.0.0.2", 01:13:28.613 "trsvcid": "4420", 01:13:28.613 "trtype": "TCP" 01:13:28.613 }, 01:13:28.613 "peer_address": { 01:13:28.613 "adrfam": "IPv4", 01:13:28.613 "traddr": "10.0.0.1", 01:13:28.613 "trsvcid": "58422", 01:13:28.613 "trtype": "TCP" 01:13:28.613 }, 01:13:28.613 "qid": 0, 01:13:28.613 "state": "enabled", 01:13:28.613 "thread": "nvmf_tgt_poll_group_000" 01:13:28.613 } 01:13:28.613 ]' 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:28.613 11:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:29.188 11:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:13:29.809 11:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:29.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:29.809 11:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:29.809 11:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:29.809 11:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:29.809 11:10:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:29.809 11:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:29.809 11:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:13:29.809 11:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:30.067 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:30.325 01:13:30.325 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:30.325 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:30.325 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:30.584 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:30.584 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:30.584 11:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:30.584 11:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:30.584 11:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:30.584 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:30.584 { 01:13:30.584 "auth": { 01:13:30.584 "dhgroup": "ffdhe3072", 01:13:30.584 "digest": "sha512", 01:13:30.584 "state": "completed" 01:13:30.584 }, 01:13:30.584 "cntlid": 117, 01:13:30.584 "listen_address": { 01:13:30.584 "adrfam": 
"IPv4", 01:13:30.584 "traddr": "10.0.0.2", 01:13:30.584 "trsvcid": "4420", 01:13:30.584 "trtype": "TCP" 01:13:30.584 }, 01:13:30.584 "peer_address": { 01:13:30.584 "adrfam": "IPv4", 01:13:30.584 "traddr": "10.0.0.1", 01:13:30.584 "trsvcid": "58966", 01:13:30.584 "trtype": "TCP" 01:13:30.584 }, 01:13:30.584 "qid": 0, 01:13:30.584 "state": "enabled", 01:13:30.584 "thread": "nvmf_tgt_poll_group_000" 01:13:30.584 } 01:13:30.584 ]' 01:13:30.584 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:30.842 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:30.842 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:30.842 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:13:30.842 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:30.842 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:30.842 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:30.842 11:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:31.099 11:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:13:32.031 11:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:32.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:32.031 11:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:32.031 11:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:32.031 11:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:32.031 11:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:32.031 11:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:32.031 11:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:13:32.031 11:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:32.031 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:32.595 01:13:32.595 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:32.595 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:32.595 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:32.852 { 01:13:32.852 "auth": { 01:13:32.852 "dhgroup": "ffdhe3072", 01:13:32.852 "digest": "sha512", 01:13:32.852 "state": "completed" 01:13:32.852 }, 01:13:32.852 "cntlid": 119, 01:13:32.852 "listen_address": { 01:13:32.852 "adrfam": "IPv4", 01:13:32.852 "traddr": "10.0.0.2", 01:13:32.852 "trsvcid": "4420", 01:13:32.852 "trtype": "TCP" 01:13:32.852 }, 01:13:32.852 "peer_address": { 01:13:32.852 "adrfam": "IPv4", 01:13:32.852 "traddr": "10.0.0.1", 01:13:32.852 "trsvcid": "58994", 01:13:32.852 "trtype": "TCP" 01:13:32.852 }, 01:13:32.852 "qid": 0, 01:13:32.852 "state": "enabled", 01:13:32.852 "thread": "nvmf_tgt_poll_group_000" 01:13:32.852 } 01:13:32.852 ]' 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:13:32.852 11:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:32.852 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:32.852 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:32.852 11:10:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:33.416 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:33.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:13:33.980 11:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:13:34.237 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:34.238 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:34.495 01:13:34.495 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:34.495 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:34.495 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:35.060 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:35.060 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:35.060 11:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:35.060 11:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:35.060 11:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:35.060 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:35.060 { 01:13:35.060 "auth": { 01:13:35.060 "dhgroup": "ffdhe4096", 01:13:35.060 "digest": "sha512", 01:13:35.060 "state": "completed" 01:13:35.060 }, 01:13:35.060 "cntlid": 121, 01:13:35.060 "listen_address": { 01:13:35.060 "adrfam": "IPv4", 01:13:35.060 "traddr": "10.0.0.2", 01:13:35.060 "trsvcid": "4420", 01:13:35.060 "trtype": "TCP" 01:13:35.060 }, 01:13:35.060 "peer_address": { 01:13:35.060 "adrfam": "IPv4", 01:13:35.060 "traddr": "10.0.0.1", 01:13:35.060 "trsvcid": "59024", 01:13:35.060 "trtype": "TCP" 01:13:35.060 }, 01:13:35.060 "qid": 0, 01:13:35.060 "state": "enabled", 01:13:35.060 "thread": "nvmf_tgt_poll_group_000" 01:13:35.060 } 01:13:35.060 ]' 01:13:35.060 11:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:35.060 11:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:35.060 11:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:35.060 11:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:13:35.060 11:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:35.060 11:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:35.060 11:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:35.060 11:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:35.318 11:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:13:35.883 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:35.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:35.883 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:35.883 11:10:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:35.883 11:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:35.883 11:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:35.883 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:35.883 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:13:35.883 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:13:36.449 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 01:13:36.449 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:36.449 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:36.449 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:13:36.449 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:13:36.450 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:36.450 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:36.450 11:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:36.450 11:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:36.450 11:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:36.450 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:36.450 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:36.708 01:13:36.708 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:36.708 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:36.708 11:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:36.967 { 
01:13:36.967 "auth": { 01:13:36.967 "dhgroup": "ffdhe4096", 01:13:36.967 "digest": "sha512", 01:13:36.967 "state": "completed" 01:13:36.967 }, 01:13:36.967 "cntlid": 123, 01:13:36.967 "listen_address": { 01:13:36.967 "adrfam": "IPv4", 01:13:36.967 "traddr": "10.0.0.2", 01:13:36.967 "trsvcid": "4420", 01:13:36.967 "trtype": "TCP" 01:13:36.967 }, 01:13:36.967 "peer_address": { 01:13:36.967 "adrfam": "IPv4", 01:13:36.967 "traddr": "10.0.0.1", 01:13:36.967 "trsvcid": "59060", 01:13:36.967 "trtype": "TCP" 01:13:36.967 }, 01:13:36.967 "qid": 0, 01:13:36.967 "state": "enabled", 01:13:36.967 "thread": "nvmf_tgt_poll_group_000" 01:13:36.967 } 01:13:36.967 ]' 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:36.967 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:37.225 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:13:37.225 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:37.225 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:37.225 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:37.225 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:37.483 11:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:13:38.048 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:38.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:38.048 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:38.048 11:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:38.048 11:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:38.048 11:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:38.048 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:38.048 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:13:38.048 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe4096 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:38.306 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:38.872 01:13:38.872 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:38.872 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:38.872 11:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:39.129 { 01:13:39.129 "auth": { 01:13:39.129 "dhgroup": "ffdhe4096", 01:13:39.129 "digest": "sha512", 01:13:39.129 "state": "completed" 01:13:39.129 }, 01:13:39.129 "cntlid": 125, 01:13:39.129 "listen_address": { 01:13:39.129 "adrfam": "IPv4", 01:13:39.129 "traddr": "10.0.0.2", 01:13:39.129 "trsvcid": "4420", 01:13:39.129 "trtype": "TCP" 01:13:39.129 }, 01:13:39.129 "peer_address": { 01:13:39.129 "adrfam": "IPv4", 01:13:39.129 "traddr": "10.0.0.1", 01:13:39.129 "trsvcid": "59086", 01:13:39.129 "trtype": "TCP" 01:13:39.129 }, 01:13:39.129 "qid": 0, 01:13:39.129 "state": "enabled", 01:13:39.129 "thread": "nvmf_tgt_poll_group_000" 01:13:39.129 } 01:13:39.129 ]' 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:39.129 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:39.693 11:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:13:40.257 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:40.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:40.257 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:40.257 11:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:40.257 11:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:40.257 11:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:40.257 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:40.257 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:13:40.257 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:40.515 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:40.773 01:13:40.773 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:40.773 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:40.773 11:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:41.029 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:41.029 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:41.029 11:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:41.029 11:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:41.029 11:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:41.029 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:41.029 { 01:13:41.029 "auth": { 01:13:41.029 "dhgroup": "ffdhe4096", 01:13:41.029 "digest": "sha512", 01:13:41.029 "state": "completed" 01:13:41.029 }, 01:13:41.029 "cntlid": 127, 01:13:41.029 "listen_address": { 01:13:41.029 "adrfam": "IPv4", 01:13:41.029 "traddr": "10.0.0.2", 01:13:41.029 "trsvcid": "4420", 01:13:41.029 "trtype": "TCP" 01:13:41.029 }, 01:13:41.029 "peer_address": { 01:13:41.029 "adrfam": "IPv4", 01:13:41.029 "traddr": "10.0.0.1", 01:13:41.029 "trsvcid": "51590", 01:13:41.029 "trtype": "TCP" 01:13:41.029 }, 01:13:41.029 "qid": 0, 01:13:41.029 "state": "enabled", 01:13:41.029 "thread": "nvmf_tgt_poll_group_000" 01:13:41.029 } 01:13:41.029 ]' 01:13:41.029 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:41.287 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:41.287 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:41.287 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:13:41.287 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:41.287 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:41.287 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:41.287 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:41.544 11:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:42.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:42.476 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:42.477 11:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:42.477 11:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:42.477 11:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:42.477 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:42.477 11:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:43.041 01:13:43.041 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:43.041 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:43.041 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
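Note that the key3 pass that just finished (cntlid 127) never passes a controller key: nvmf_subsystem_add_host, bdev_nvme_attach_controller and the nvme connect call all carry only --dhchap-key key3 / --dhchap-secret. The expansion traced at target/auth.sh@37 only emits --dhchap-ctrlr-key when ckeys[keyid] is non-empty, so ckeys[3] is evidently left empty and the key3 passes exercise unidirectional authentication (the host proves itself, the controller is not asked to prove itself back). A minimal reconstruction of that expansion, with illustrative array contents:

# Conditional controller-key argument, reconstructed from target/auth.sh@37.
# Illustrative contents: ckeys[0..2] set, ckeys[3] left empty as the trace implies.
ckeys=("ckey0" "ckey1" "ckey2" "")

for keyid in 0 1 2 3; do
    # Expands to "--dhchap-ctrlr-key ckeyN" only when ckeys[keyid] is non-empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> --dhchap-key key$keyid ${ckey[*]}"
done
# key0..key2 gain the controller key; key3 gets none, i.e. bidirectional
# authentication is simply skipped for that index.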
01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:43.300 { 01:13:43.300 "auth": { 01:13:43.300 "dhgroup": "ffdhe6144", 01:13:43.300 "digest": "sha512", 01:13:43.300 "state": "completed" 01:13:43.300 }, 01:13:43.300 "cntlid": 129, 01:13:43.300 "listen_address": { 01:13:43.300 "adrfam": "IPv4", 01:13:43.300 "traddr": "10.0.0.2", 01:13:43.300 "trsvcid": "4420", 01:13:43.300 "trtype": "TCP" 01:13:43.300 }, 01:13:43.300 "peer_address": { 01:13:43.300 "adrfam": "IPv4", 01:13:43.300 "traddr": "10.0.0.1", 01:13:43.300 "trsvcid": "51624", 01:13:43.300 "trtype": "TCP" 01:13:43.300 }, 01:13:43.300 "qid": 0, 01:13:43.300 "state": "enabled", 01:13:43.300 "thread": "nvmf_tgt_poll_group_000" 01:13:43.300 } 01:13:43.300 ]' 01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:43.300 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:43.558 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:13:43.558 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:43.558 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:43.558 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:43.558 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:43.830 11:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:13:44.410 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:44.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:44.410 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:44.410 11:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:44.410 11:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:44.410 11:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:44.410 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:44.410 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:13:44.410 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 01:13:44.668 11:10:49 
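Each pass is then verified from both ends exactly as the jq probes above show: the host must report a single controller named nvme0, and the admin qpair on the target must have negotiated the digest and DH group under test with an authentication state of completed. Roughly, with the rpc.py path and NQN taken from the trace and the target assumed to sit on the default RPC socket:

# Post-attach verification for one pass (here: sha512 / ffdhe6144).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: exactly one controller, named nvme0, should exist.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair's auth block must match the parameters under test.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]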
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:44.668 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:44.669 11:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:45.236 01:13:45.236 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:45.236 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:45.236 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:45.496 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:45.496 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:45.496 11:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:45.496 11:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:45.496 11:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:45.496 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:45.496 { 01:13:45.496 "auth": { 01:13:45.496 "dhgroup": "ffdhe6144", 01:13:45.496 "digest": "sha512", 01:13:45.496 "state": "completed" 01:13:45.496 }, 01:13:45.496 "cntlid": 131, 01:13:45.496 "listen_address": { 01:13:45.496 "adrfam": "IPv4", 01:13:45.496 "traddr": "10.0.0.2", 01:13:45.496 "trsvcid": "4420", 01:13:45.496 "trtype": "TCP" 01:13:45.496 }, 01:13:45.496 "peer_address": { 01:13:45.496 "adrfam": "IPv4", 01:13:45.496 "traddr": "10.0.0.1", 01:13:45.496 "trsvcid": "51644", 01:13:45.496 "trtype": "TCP" 01:13:45.496 }, 01:13:45.496 "qid": 0, 01:13:45.496 "state": "enabled", 01:13:45.496 "thread": "nvmf_tgt_poll_group_000" 01:13:45.496 } 01:13:45.496 ]' 01:13:45.496 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:45.496 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:45.496 11:10:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:45.754 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:13:45.755 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:45.755 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:45.755 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:45.755 11:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:46.013 11:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:13:46.580 11:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:46.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:46.580 11:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:46.580 11:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:46.580 11:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:46.580 11:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:46.580 11:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:46.580 11:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:13:46.580 11:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:46.838 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:47.403 01:13:47.403 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:47.403 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:47.403 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:47.661 { 01:13:47.661 "auth": { 01:13:47.661 "dhgroup": "ffdhe6144", 01:13:47.661 "digest": "sha512", 01:13:47.661 "state": "completed" 01:13:47.661 }, 01:13:47.661 "cntlid": 133, 01:13:47.661 "listen_address": { 01:13:47.661 "adrfam": "IPv4", 01:13:47.661 "traddr": "10.0.0.2", 01:13:47.661 "trsvcid": "4420", 01:13:47.661 "trtype": "TCP" 01:13:47.661 }, 01:13:47.661 "peer_address": { 01:13:47.661 "adrfam": "IPv4", 01:13:47.661 "traddr": "10.0.0.1", 01:13:47.661 "trsvcid": "51676", 01:13:47.661 "trtype": "TCP" 01:13:47.661 }, 01:13:47.661 "qid": 0, 01:13:47.661 "state": "enabled", 01:13:47.661 "thread": "nvmf_tgt_poll_group_000" 01:13:47.661 } 01:13:47.661 ]' 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:13:47.661 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:47.933 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:47.933 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:47.933 11:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:47.933 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret 
DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:13:48.500 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:48.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:48.500 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:48.500 11:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:48.500 11:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:48.500 11:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:48.500 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:48.500 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:13:48.500 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:48.758 11:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:49.324 01:13:49.324 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:49.324 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:49.324 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:49.582 { 01:13:49.582 "auth": { 01:13:49.582 "dhgroup": "ffdhe6144", 01:13:49.582 "digest": "sha512", 01:13:49.582 "state": "completed" 01:13:49.582 }, 01:13:49.582 "cntlid": 135, 01:13:49.582 "listen_address": { 01:13:49.582 "adrfam": "IPv4", 01:13:49.582 "traddr": "10.0.0.2", 01:13:49.582 "trsvcid": "4420", 01:13:49.582 "trtype": "TCP" 01:13:49.582 }, 01:13:49.582 "peer_address": { 01:13:49.582 "adrfam": "IPv4", 01:13:49.582 "traddr": "10.0.0.1", 01:13:49.582 "trsvcid": "34336", 01:13:49.582 "trtype": "TCP" 01:13:49.582 }, 01:13:49.582 "qid": 0, 01:13:49.582 "state": "enabled", 01:13:49.582 "thread": "nvmf_tgt_poll_group_000" 01:13:49.582 } 01:13:49.582 ]' 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:49.582 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:49.840 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:13:49.840 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:49.840 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:49.840 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:49.840 11:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:50.097 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:50.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:13:50.663 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:50.920 11:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:51.486 01:13:51.486 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:51.486 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:51.486 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:51.744 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:51.744 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:51.744 11:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:51.744 11:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:51.744 11:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:51.744 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:51.744 { 01:13:51.744 "auth": { 01:13:51.744 "dhgroup": "ffdhe8192", 01:13:51.744 "digest": "sha512", 01:13:51.744 "state": "completed" 01:13:51.744 }, 01:13:51.745 "cntlid": 137, 01:13:51.745 "listen_address": { 01:13:51.745 "adrfam": "IPv4", 01:13:51.745 "traddr": "10.0.0.2", 01:13:51.745 "trsvcid": "4420", 01:13:51.745 "trtype": "TCP" 01:13:51.745 }, 01:13:51.745 "peer_address": { 01:13:51.745 "adrfam": "IPv4", 01:13:51.745 "traddr": "10.0.0.1", 01:13:51.745 "trsvcid": "34378", 01:13:51.745 "trtype": "TCP" 01:13:51.745 }, 01:13:51.745 "qid": 0, 01:13:51.745 "state": "enabled", 01:13:51.745 "thread": "nvmf_tgt_poll_group_000" 01:13:51.745 } 
01:13:51.745 ]' 01:13:51.745 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:51.745 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:51.745 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:51.745 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:51.745 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:52.004 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:52.004 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:52.004 11:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:52.263 11:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:13:52.832 11:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:52.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:52.832 11:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:52.832 11:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:52.832 11:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:52.832 11:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:52.832 11:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:52.832 11:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:13:52.832 11:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:53.090 11:10:58 
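The kernel-initiator leg of each pass uses nvme-cli directly with the raw DH-HMAC-CHAP secrets, as in the nvme connect line above; the digit after DHHC-1: encodes the transformation hash of the secret (0 for an untransformed key, 1/2/3 for SHA-256/384/512). A sketch with the secrets replaced by placeholders (the real values are the DHHC-1 blobs generated earlier in the test run):

# Kernel-initiator connect/disconnect for one pass, as seen in the trace.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479
hostid=8977fc08-3b30-49e8-886e-3a1f0545f479
subnqn=nqn.2024-03.io.spdk:cnode0
key='DHHC-1:00:<host secret>'        # placeholder for the key0 secret from the trace
ctrl_key='DHHC-1:03:<ctrlr secret>'  # placeholder for the ckey0 secret from the trace

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n "$subnqn"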
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:53.090 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:13:53.657 01:13:53.657 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:53.657 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:53.657 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:53.915 { 01:13:53.915 "auth": { 01:13:53.915 "dhgroup": "ffdhe8192", 01:13:53.915 "digest": "sha512", 01:13:53.915 "state": "completed" 01:13:53.915 }, 01:13:53.915 "cntlid": 139, 01:13:53.915 "listen_address": { 01:13:53.915 "adrfam": "IPv4", 01:13:53.915 "traddr": "10.0.0.2", 01:13:53.915 "trsvcid": "4420", 01:13:53.915 "trtype": "TCP" 01:13:53.915 }, 01:13:53.915 "peer_address": { 01:13:53.915 "adrfam": "IPv4", 01:13:53.915 "traddr": "10.0.0.1", 01:13:53.915 "trsvcid": "34414", 01:13:53.915 "trtype": "TCP" 01:13:53.915 }, 01:13:53.915 "qid": 0, 01:13:53.915 "state": "enabled", 01:13:53.915 "thread": "nvmf_tgt_poll_group_000" 01:13:53.915 } 01:13:53.915 ]' 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:53.915 11:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:53.915 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:53.915 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:53.915 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:53.915 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:53.915 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:54.173 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:01:YTY3YzYwODg1ZWIzYjJlZThmNjljNDdkZTJmZDc3NDMkEHoJ: --dhchap-ctrl-secret DHHC-1:02:MjQ1M2Y0MzczMDQ4OGY3NjY0NmJkNmY2ZmRhMjk1NDI3NjZkZTA5MzQyZGM0NDAzSjhaFA==: 01:13:54.738 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:54.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:54.738 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:54.738 11:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:54.738 11:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:54.738 11:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:54.738 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:54.738 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:13:54.738 11:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:54.996 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:13:55.559 01:13:55.559 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:55.559 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:55.559 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:55.815 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:55.816 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:55.816 11:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:55.816 11:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:55.816 11:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:55.816 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:55.816 { 01:13:55.816 "auth": { 01:13:55.816 "dhgroup": "ffdhe8192", 01:13:55.816 "digest": "sha512", 01:13:55.816 "state": "completed" 01:13:55.816 }, 01:13:55.816 "cntlid": 141, 01:13:55.816 "listen_address": { 01:13:55.816 "adrfam": "IPv4", 01:13:55.816 "traddr": "10.0.0.2", 01:13:55.816 "trsvcid": "4420", 01:13:55.816 "trtype": "TCP" 01:13:55.816 }, 01:13:55.816 "peer_address": { 01:13:55.816 "adrfam": "IPv4", 01:13:55.816 "traddr": "10.0.0.1", 01:13:55.816 "trsvcid": "34430", 01:13:55.816 "trtype": "TCP" 01:13:55.816 }, 01:13:55.816 "qid": 0, 01:13:55.816 "state": "enabled", 01:13:55.816 "thread": "nvmf_tgt_poll_group_000" 01:13:55.816 } 01:13:55.816 ]' 01:13:55.816 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:55.816 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:55.816 11:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:56.073 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:56.073 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:56.073 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:56.073 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:56.073 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:56.330 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:02:ZTEyNGM2ZjljZmIzMGZlNDZiMzNhZWExNDU2OTRjYTJkN2QwYmI3ZGNjZGI2MjE1Tnm2Mg==: --dhchap-ctrl-secret DHHC-1:01:MjA4YWI5YzQzM2RmNTY0NTZkMTVjZmU1NzkxZDdmZTY2vFjH: 01:13:56.896 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:56.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:56.896 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:56.896 11:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:56.896 11:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:56.896 11:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:56.896 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 01:13:56.896 11:11:01 
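Two SPDK RPC endpoints are being driven throughout: rpc_cmd talks to the nvmf target, while hostrpc (target/auth.sh@31) is rpc.py pointed at /var/tmp/host.sock, the host-side application whose bdev_nvme controllers are the ones authenticating. The real rpc_cmd helper keeps a persistent rpc.py session, so the stand-ins below are only a functional approximation under that assumption:

rootdir=/home/vagrant/spdk_repo/spdk   # repo path as it appears in the trace

# Host-side SPDK application (socket taken from the hostrpc expansions above).
hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }

# Target side; assumed to sit on rpc.py's default socket.
rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }

# The post-attach checks issued in every pass:
hostrpc bdev_nvme_get_controllers
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0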
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:13:56.896 11:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:57.153 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:13:57.719 01:13:57.719 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:57.719 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:57.719 11:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:57.976 { 01:13:57.976 "auth": { 01:13:57.976 "dhgroup": "ffdhe8192", 01:13:57.976 "digest": "sha512", 01:13:57.976 "state": "completed" 01:13:57.976 }, 01:13:57.976 "cntlid": 143, 01:13:57.976 "listen_address": { 01:13:57.976 "adrfam": "IPv4", 01:13:57.976 "traddr": "10.0.0.2", 01:13:57.976 "trsvcid": "4420", 01:13:57.976 "trtype": "TCP" 01:13:57.976 }, 01:13:57.976 "peer_address": { 01:13:57.976 "adrfam": "IPv4", 01:13:57.976 "traddr": "10.0.0.1", 01:13:57.976 "trsvcid": 
"34454", 01:13:57.976 "trtype": "TCP" 01:13:57.976 }, 01:13:57.976 "qid": 0, 01:13:57.976 "state": "enabled", 01:13:57.976 "thread": "nvmf_tgt_poll_group_000" 01:13:57.976 } 01:13:57.976 ]' 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:13:57.976 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:13:58.234 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:13:58.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:13:58.806 11:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:59.114 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:13:59.680 01:13:59.680 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:13:59.680 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:13:59.680 11:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:13:59.938 { 01:13:59.938 "auth": { 01:13:59.938 "dhgroup": "ffdhe8192", 01:13:59.938 "digest": "sha512", 01:13:59.938 "state": "completed" 01:13:59.938 }, 01:13:59.938 "cntlid": 145, 01:13:59.938 "listen_address": { 01:13:59.938 "adrfam": "IPv4", 01:13:59.938 "traddr": "10.0.0.2", 01:13:59.938 "trsvcid": "4420", 01:13:59.938 "trtype": "TCP" 01:13:59.938 }, 01:13:59.938 "peer_address": { 01:13:59.938 "adrfam": "IPv4", 01:13:59.938 "traddr": "10.0.0.1", 01:13:59.938 "trsvcid": "52852", 01:13:59.938 "trtype": "TCP" 01:13:59.938 }, 01:13:59.938 "qid": 0, 01:13:59.938 "state": "enabled", 01:13:59.938 "thread": "nvmf_tgt_poll_group_000" 01:13:59.938 } 01:13:59.938 ]' 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:13:59.938 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:14:00.195 11:11:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:14:00.195 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:14:00.195 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:14:00.453 11:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret DHHC-1:00:ZGM5MDFiZWUwNjY3NzY2MmU4Y2E5ODQzMjkwODE1OWVmMzE5MzNhZjRhN2YzNGNiKT7p5A==: --dhchap-ctrl-secret DHHC-1:03:M2EzNzRkODYzNTM0NmRjZGVjZWQ3NWY3YmY5Mzg3MTc1OTgzZGI2ZjY4YjU3OTJkNTU1ODEwODRkNzNiY2Y4ZVEWKfw=: 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:14:01.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 01:14:01.018 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 01:14:01.582 2024/07/22 11:11:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:01.582 request: 01:14:01.582 { 01:14:01.582 "method": "bdev_nvme_attach_controller", 01:14:01.582 "params": { 01:14:01.582 "name": "nvme0", 01:14:01.582 "trtype": "tcp", 01:14:01.582 "traddr": "10.0.0.2", 01:14:01.582 "adrfam": "ipv4", 01:14:01.582 "trsvcid": "4420", 01:14:01.582 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:14:01.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479", 01:14:01.582 "prchk_reftag": false, 01:14:01.582 "prchk_guard": false, 01:14:01.582 "hdgst": false, 01:14:01.582 "ddgst": false, 01:14:01.582 "dhchap_key": "key2" 01:14:01.582 } 01:14:01.582 } 01:14:01.582 Got JSON-RPC error response 01:14:01.582 GoRPCClient: error on JSON-RPC call 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey2 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:14:01.582 11:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:14:02.146 2024/07/22 11:11:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:02.146 request: 01:14:02.146 { 01:14:02.146 "method": "bdev_nvme_attach_controller", 01:14:02.146 "params": { 01:14:02.146 "name": "nvme0", 01:14:02.146 "trtype": "tcp", 01:14:02.146 "traddr": "10.0.0.2", 01:14:02.146 "adrfam": "ipv4", 01:14:02.146 "trsvcid": "4420", 01:14:02.146 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:14:02.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479", 01:14:02.146 "prchk_reftag": false, 01:14:02.146 "prchk_guard": false, 01:14:02.146 "hdgst": false, 01:14:02.146 "ddgst": false, 01:14:02.146 "dhchap_key": "key1", 01:14:02.146 "dhchap_ctrlr_key": "ckey2" 01:14:02.146 } 01:14:02.146 } 01:14:02.146 Got JSON-RPC error response 01:14:02.146 GoRPCClient: error on JSON-RPC call 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key1 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:14:02.146 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:14:02.403 11:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:14:02.710 2024/07/22 11:11:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:02.710 request: 01:14:02.710 { 01:14:02.710 "method": "bdev_nvme_attach_controller", 01:14:02.710 "params": { 01:14:02.710 "name": "nvme0", 01:14:02.710 "trtype": "tcp", 01:14:02.710 "traddr": "10.0.0.2", 01:14:02.710 "adrfam": "ipv4", 01:14:02.710 "trsvcid": "4420", 01:14:02.710 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:14:02.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479", 01:14:02.710 "prchk_reftag": false, 01:14:02.710 "prchk_guard": false, 01:14:02.710 "hdgst": false, 01:14:02.710 "ddgst": false, 01:14:02.710 "dhchap_key": "key1", 01:14:02.710 "dhchap_ctrlr_key": "ckey1" 01:14:02.710 } 01:14:02.710 } 01:14:02.710 Got JSON-RPC error response 01:14:02.710 GoRPCClient: error on JSON-RPC call 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:02.710 11:11:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 94102 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 94102 ']' 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 94102 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94102 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:14:02.710 killing process with pid 94102 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94102' 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 94102 01:14:02.710 11:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 94102 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=98956 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 98956 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 98956 ']' 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
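
[Editor's note] For reference, the mismatched-key checks traced above all follow one shape: the target is provisioned with one DH-HMAC-CHAP key via nvmf_subsystem_add_host, the host then attempts bdev_nvme_attach_controller with a different key (or controller key), and the call is required to fail with the Input/output error visible in the JSON-RPC responses. A minimal shell sketch of that pattern, assuming a target on the default /var/tmp/spdk.sock and the host bdev layer on /var/tmp/host.sock as in this run; key1/ckey1/ckey2 refer to keys registered earlier in the test (not shown here), and the if/exit wrapper is a simplification of the suite's NOT helper, not its actual implementation:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479

    # Target side: authorize the host with key1/ckey1 only.
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: deliberately present the wrong controller key; the attach must fail.
    if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "attach with mismatched ctrlr key unexpectedly succeeded" >&2
        exit 1
    fi
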
01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:02.967 11:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:03.901 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:03.901 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:14:03.901 11:11:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:14:03.901 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 01:14:03.901 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 98956 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 98956 ']' 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:04.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:04.159 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
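
[Editor's note] The connect_authenticate step resuming below is the positive-path counterpart and condenses to four RPC calls: authorize the host for the key under test on the target, pin the host's bdev_nvme layer to the digest/dhgroup being exercised, attach the controller with that key, and confirm the qpair reports auth state "completed". A condensed sketch using the same sockets, NQNs, and jq filter that appear in this run (key3 is one of the keys registered earlier in the test):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479

    # Target side (default /var/tmp/spdk.sock): allow the host to authenticate with key3.
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3

    # Host side (/var/tmp/host.sock): restrict digests/dhgroups, then attach with the same key.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

    # Verify the authentication parameters on the resulting qpair.
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expected: completed
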
01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:04.417 11:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:04.983 01:14:04.983 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 01:14:04.983 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 01:14:04.983 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 01:14:05.241 { 01:14:05.241 "auth": { 01:14:05.241 "dhgroup": "ffdhe8192", 01:14:05.241 "digest": "sha512", 01:14:05.241 "state": "completed" 01:14:05.241 }, 01:14:05.241 "cntlid": 1, 01:14:05.241 "listen_address": { 01:14:05.241 "adrfam": "IPv4", 01:14:05.241 "traddr": "10.0.0.2", 01:14:05.241 "trsvcid": "4420", 01:14:05.241 "trtype": "TCP" 01:14:05.241 }, 01:14:05.241 "peer_address": { 01:14:05.241 "adrfam": "IPv4", 01:14:05.241 "traddr": "10.0.0.1", 01:14:05.241 "trsvcid": "52902", 01:14:05.241 "trtype": "TCP" 01:14:05.241 }, 01:14:05.241 "qid": 0, 01:14:05.241 "state": "enabled", 01:14:05.241 "thread": "nvmf_tgt_poll_group_000" 01:14:05.241 } 01:14:05.241 ]' 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:14:05.241 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 01:14:05.498 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:14:05.498 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 01:14:05.498 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:14:05.755 11:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid 8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-secret 
DHHC-1:03:N2YxNmQwODVkZTc4MzIyNzhlNTZlYjEwZmUxYTBiZjM3NjM0MmE0M2ZlMzkxNGQ5N2VjNTE0NWYyYzhmYTRmOe6MyQA=: 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:14:06.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --dhchap-key key3 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 01:14:06.320 11:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:06.578 11:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:07.145 2024/07/22 11:11:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:07.145 request: 01:14:07.145 { 01:14:07.145 "method": "bdev_nvme_attach_controller", 01:14:07.145 "params": { 01:14:07.145 "name": "nvme0", 01:14:07.145 "trtype": "tcp", 01:14:07.145 "traddr": "10.0.0.2", 01:14:07.145 "adrfam": "ipv4", 01:14:07.145 "trsvcid": "4420", 01:14:07.145 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:14:07.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479", 01:14:07.145 "prchk_reftag": false, 01:14:07.145 "prchk_guard": false, 01:14:07.145 "hdgst": false, 01:14:07.145 "ddgst": false, 01:14:07.145 "dhchap_key": "key3" 01:14:07.145 } 01:14:07.145 } 01:14:07.145 Got JSON-RPC error response 01:14:07.145 GoRPCClient: error on JSON-RPC call 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:07.145 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 01:14:07.404 2024/07/22 11:11:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:07.404 request: 01:14:07.404 { 01:14:07.404 "method": "bdev_nvme_attach_controller", 01:14:07.404 "params": { 01:14:07.404 "name": "nvme0", 01:14:07.404 "trtype": "tcp", 01:14:07.404 "traddr": "10.0.0.2", 01:14:07.404 "adrfam": "ipv4", 01:14:07.404 "trsvcid": "4420", 01:14:07.404 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:14:07.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479", 01:14:07.404 "prchk_reftag": false, 01:14:07.404 "prchk_guard": false, 01:14:07.404 "hdgst": false, 01:14:07.404 "ddgst": false, 01:14:07.404 "dhchap_key": "key3" 01:14:07.404 } 01:14:07.404 } 01:14:07.404 Got JSON-RPC error response 01:14:07.404 GoRPCClient: error on JSON-RPC call 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:14:07.404 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:14:07.664 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:07.664 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:07.664 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:07.664 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:07.664 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:07.664 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:07.664 11:11:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:07.922 11:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:14:07.923 11:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:14:08.180 2024/07/22 11:11:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:08.180 request: 01:14:08.180 { 01:14:08.180 "method": "bdev_nvme_attach_controller", 01:14:08.180 "params": { 01:14:08.180 "name": "nvme0", 01:14:08.180 "trtype": "tcp", 01:14:08.180 "traddr": "10.0.0.2", 01:14:08.180 "adrfam": "ipv4", 01:14:08.180 "trsvcid": "4420", 01:14:08.180 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:14:08.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479", 01:14:08.180 "prchk_reftag": false, 01:14:08.180 "prchk_guard": false, 01:14:08.180 "hdgst": false, 01:14:08.180 "ddgst": false, 01:14:08.180 "dhchap_key": "key0", 01:14:08.180 "dhchap_ctrlr_key": "key1" 01:14:08.180 } 01:14:08.180 } 01:14:08.180 Got JSON-RPC error response 01:14:08.180 GoRPCClient: error on JSON-RPC call 01:14:08.180 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 01:14:08.180 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:08.180 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:08.180 11:11:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:08.180 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 01:14:08.180 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 01:14:08.438 01:14:08.438 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 01:14:08.438 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 01:14:08.438 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:14:08.438 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:14:08.438 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 01:14:08.438 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 94146 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 94146 ']' 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 94146 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94146 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:14:08.695 killing process with pid 94146 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94146' 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 94146 01:14:08.695 11:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 94146 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:14:09.260 rmmod nvme_tcp 01:14:09.260 rmmod nvme_fabrics 01:14:09.260 rmmod nvme_keyring 
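
[Editor's note] The remaining entries are the standard cleanup path: the host-side NVMe/TCP modules are unloaded, the nvmf_tgt reactor is stopped, the initiator-side test interface is flushed, and the generated DHCHAP key files are removed. A compressed sketch of that teardown, using the pid and module names from this run; the glob stands in for the explicit key-file list in the log, and plain kill is a simplification of the suite's killprocess helper:

    # Unload the host-side kernel modules pulled in by `nvme connect`.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the target (pid 98956 in this run) and flush the initiator-side test interface.
    kill 98956
    ip -4 addr flush nvmf_init_if

    # Drop the DHCHAP key files generated for the test.
    rm -f /tmp/spdk.key-*
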
01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 98956 ']' 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 98956 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 98956 ']' 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 98956 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:09.260 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98956 01:14:09.519 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:14:09.519 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:14:09.519 killing process with pid 98956 01:14:09.519 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98956' 01:14:09.519 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 98956 01:14:09.519 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 98956 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.efi /tmp/spdk.key-sha256.3V3 /tmp/spdk.key-sha384.Nhz /tmp/spdk.key-sha512.1cc /tmp/spdk.key-sha512.gzH /tmp/spdk.key-sha384.hcX /tmp/spdk.key-sha256.iQ8 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 01:14:09.777 01:14:09.777 real 2m50.832s 01:14:09.777 user 6m55.432s 01:14:09.777 sys 0m23.608s 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 01:14:09.777 11:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:14:09.777 ************************************ 01:14:09.777 END TEST nvmf_auth_target 01:14:09.777 ************************************ 01:14:09.777 11:11:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:14:09.777 11:11:14 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 01:14:09.777 11:11:14 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:14:09.777 11:11:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 01:14:09.777 11:11:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:14:09.777 11:11:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:14:09.777 ************************************ 01:14:09.777 START TEST nvmf_bdevio_no_huge 01:14:09.777 ************************************ 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:14:09.777 * Looking for test storage... 01:14:09.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:09.777 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:14:10.037 11:11:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:14:10.037 Cannot find device "nvmf_tgt_br" 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:14:10.037 Cannot find device "nvmf_tgt_br2" 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:14:10.037 11:11:15 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:14:10.037 Cannot find device "nvmf_tgt_br" 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:14:10.037 Cannot find device "nvmf_tgt_br2" 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:10.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:14:10.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:14:10.037 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:14:10.038 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:14:10.296 11:11:15 
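The commands traced above (common.sh@166 through @189) rebuild the test topology from scratch: one network namespace for the target, three veth pairs, and the 10.0.0.0/24 addresses used for the rest of the run. A condensed, standalone sketch of those steps, with names and addresses taken from the trace (requires root):

#!/usr/bin/env bash
# Sketch of the veth/netns topology built by nvmf_veth_init (run as root).
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# One veth pair per interface: the *_if end is used directly, the *_br end
# will later be enslaved to a bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side interfaces into the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addresses: initiator on the host, two target IPs inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up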
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:14:10.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:14:10.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 01:14:10.296 01:14:10.296 --- 10.0.0.2 ping statistics --- 01:14:10.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:10.296 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:14:10.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:14:10.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 01:14:10.296 01:14:10.296 --- 10.0.0.3 ping statistics --- 01:14:10.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:10.296 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:14:10.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:14:10.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 01:14:10.296 01:14:10.296 --- 10.0.0.1 ping statistics --- 01:14:10.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:10.296 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=99359 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 01:14:10.296 11:11:15 
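With the veth pairs in place, the host-side ends are joined by the nvmf_br bridge, TCP port 4420 is explicitly accepted, and connectivity is checked in both directions with single pings before the target is started. A sketch of that bridging-and-verification step, assuming the topology from the previous sketch already exists:

#!/usr/bin/env bash
# Sketch: bridge the host-side veth ends and verify connectivity (run as root).
set -e
NS=nvmf_tgt_ns_spdk

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Make sure NVMe/TCP traffic and bridged forwarding are not filtered.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# One ping per direction is enough to fail fast on a broken topology.
ping -c 1 10.0.0.2                       # host -> first target IP
ping -c 1 10.0.0.3                       # host -> second target IP
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator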
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 99359 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 99359 ']' 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:10.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:10.296 11:11:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:10.296 [2024-07-22 11:11:15.440924] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:14:10.296 [2024-07-22 11:11:15.441021] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 01:14:10.554 [2024-07-22 11:11:15.587758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:14:10.554 [2024-07-22 11:11:15.701222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:14:10.554 [2024-07-22 11:11:15.701290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:14:10.554 [2024-07-22 11:11:15.701304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:10.554 [2024-07-22 11:11:15.701315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:10.554 [2024-07-22 11:11:15.701324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
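The target itself is then launched inside the namespace with hugepages disabled and 1024 MiB of regular memory, and the harness blocks until the application's RPC socket appears (the waitforlisten call above). A minimal, hypothetical wait loop with the same effect; the flags and socket path are copied from the trace, but the polling helper itself is not SPDK's:

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the namespace and wait for its RPC socket.
# Paths are relative to an SPDK build tree.
NS=nvmf_tgt_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Poll until the UNIX domain socket exists and the process is still alive.
for _ in $(seq 1 100); do
    if [[ -S "$RPC_SOCK" ]]; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "target died during startup" >&2; exit 1; }
    sleep 0.1
done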
01:14:10.554 [2024-07-22 11:11:15.701500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 01:14:10.554 [2024-07-22 11:11:15.702018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 01:14:10.554 [2024-07-22 11:11:15.702104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 01:14:10.554 [2024-07-22 11:11:15.702119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:11.487 [2024-07-22 11:11:16.496626] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:11.487 Malloc0 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:11.487 [2024-07-22 11:11:16.536881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- 
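Once the target is up, the bdevio configuration is built entirely over the RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener on 10.0.0.2:4420. The rpc_cmd wrapper in the trace forwards to scripts/rpc.py; issued directly, the equivalent sequence would look roughly like this (arguments copied from the trace above):

#!/usr/bin/env bash
# Sketch: the RPC calls behind target/bdevio.sh@18-22, issued via scripts/rpc.py.
RPC=./scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192                    # transport options as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem "$SUBNQN" -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc0
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420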
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 01:14:11.487 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 01:14:11.488 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:14:11.488 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:14:11.488 { 01:14:11.488 "params": { 01:14:11.488 "name": "Nvme$subsystem", 01:14:11.488 "trtype": "$TEST_TRANSPORT", 01:14:11.488 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:11.488 "adrfam": "ipv4", 01:14:11.488 "trsvcid": "$NVMF_PORT", 01:14:11.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:11.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:11.488 "hdgst": ${hdgst:-false}, 01:14:11.488 "ddgst": ${ddgst:-false} 01:14:11.488 }, 01:14:11.488 "method": "bdev_nvme_attach_controller" 01:14:11.488 } 01:14:11.488 EOF 01:14:11.488 )") 01:14:11.488 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 01:14:11.488 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 01:14:11.488 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 01:14:11.488 11:11:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:14:11.488 "params": { 01:14:11.488 "name": "Nvme1", 01:14:11.488 "trtype": "tcp", 01:14:11.488 "traddr": "10.0.0.2", 01:14:11.488 "adrfam": "ipv4", 01:14:11.488 "trsvcid": "4420", 01:14:11.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:14:11.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:14:11.488 "hdgst": false, 01:14:11.488 "ddgst": false 01:14:11.488 }, 01:14:11.488 "method": "bdev_nvme_attach_controller" 01:14:11.488 }' 01:14:11.488 [2024-07-22 11:11:16.589438] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
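The JSON fragment printed above is only the per-controller entry; the harness embeds it in a full SPDK JSON configuration and hands it to bdevio through process substitution, which is why the trace shows --json /dev/fd/62. A hedged sketch of the mechanism: the attach-controller parameters are copied from the trace, while the surrounding "subsystems"/"bdev" wrapper is an assumed minimal shape, not a verbatim copy of gen_nvmf_target_json.

#!/usr/bin/env bash
# Sketch: hand bdevio a generated JSON config through process substitution.
gen_config() {
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# <(gen_config) appears to bdevio as /dev/fd/NN, matching the --json /dev/fd/62 above.
./test/bdev/bdevio/bdevio --json <(gen_config) --no-huge -s 1024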
01:14:11.488 [2024-07-22 11:11:16.589521] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid99413 ] 01:14:11.746 [2024-07-22 11:11:16.724445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:14:11.746 [2024-07-22 11:11:16.874066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:14:11.746 [2024-07-22 11:11:16.874226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:11.746 [2024-07-22 11:11:16.874235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:14:12.004 I/O targets: 01:14:12.004 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:14:12.004 01:14:12.004 01:14:12.004 CUnit - A unit testing framework for C - Version 2.1-3 01:14:12.004 http://cunit.sourceforge.net/ 01:14:12.004 01:14:12.004 01:14:12.004 Suite: bdevio tests on: Nvme1n1 01:14:12.004 Test: blockdev write read block ...passed 01:14:12.004 Test: blockdev write zeroes read block ...passed 01:14:12.004 Test: blockdev write zeroes read no split ...passed 01:14:12.004 Test: blockdev write zeroes read split ...passed 01:14:12.004 Test: blockdev write zeroes read split partial ...passed 01:14:12.004 Test: blockdev reset ...[2024-07-22 11:11:17.203453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:14:12.004 [2024-07-22 11:11:17.203561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eafb50 (9): Bad file descriptor 01:14:12.262 [2024-07-22 11:11:17.217997] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
01:14:12.262 passed 01:14:12.262 Test: blockdev write read 8 blocks ...passed 01:14:12.262 Test: blockdev write read size > 128k ...passed 01:14:12.262 Test: blockdev write read invalid size ...passed 01:14:12.262 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:14:12.262 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:14:12.262 Test: blockdev write read max offset ...passed 01:14:12.262 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:14:12.262 Test: blockdev writev readv 8 blocks ...passed 01:14:12.262 Test: blockdev writev readv 30 x 1block ...passed 01:14:12.262 Test: blockdev writev readv block ...passed 01:14:12.262 Test: blockdev writev readv size > 128k ...passed 01:14:12.262 Test: blockdev writev readv size > 128k in two iovs ...passed 01:14:12.262 Test: blockdev comparev and writev ...[2024-07-22 11:11:17.397787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:14:12.262 [2024-07-22 11:11:17.397845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:14:12.262 [2024-07-22 11:11:17.397864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:14:12.262 [2024-07-22 11:11:17.397889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:14:12.262 [2024-07-22 11:11:17.398337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:14:12.262 [2024-07-22 11:11:17.398383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:14:12.262 [2024-07-22 11:11:17.398400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:14:12.262 [2024-07-22 11:11:17.398410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:14:12.262 [2024-07-22 11:11:17.398785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:14:12.262 [2024-07-22 11:11:17.398808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:14:12.262 [2024-07-22 11:11:17.398896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:14:12.262 [2024-07-22 11:11:17.398908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:14:12.262 [2024-07-22 11:11:17.399249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:14:12.262 [2024-07-22 11:11:17.399267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:14:12.262 [2024-07-22 11:11:17.399282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:14:12.262 [2024-07-22 11:11:17.399292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:14:12.262 passed 01:14:12.520 Test: blockdev nvme passthru rw ...passed 01:14:12.520 Test: blockdev nvme passthru vendor specific ...[2024-07-22 11:11:17.483296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:14:12.520 [2024-07-22 11:11:17.483327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:14:12.520 passed 01:14:12.520 Test: blockdev nvme admin passthru ...[2024-07-22 11:11:17.483716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:14:12.520 [2024-07-22 11:11:17.483740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:14:12.520 [2024-07-22 11:11:17.483878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:14:12.520 [2024-07-22 11:11:17.483895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:14:12.520 [2024-07-22 11:11:17.484063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:14:12.520 [2024-07-22 11:11:17.484080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:14:12.520 passed 01:14:12.520 Test: blockdev copy ...passed 01:14:12.520 01:14:12.520 Run Summary: Type Total Ran Passed Failed Inactive 01:14:12.520 suites 1 1 n/a 0 0 01:14:12.520 tests 23 23 23 0 0 01:14:12.520 asserts 152 152 152 0 n/a 01:14:12.520 01:14:12.520 Elapsed time = 0.925 seconds 01:14:12.781 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:14:12.781 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 01:14:12.781 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:12.781 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:14:12.781 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:14:12.781 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 01:14:12.781 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 01:14:12.781 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 01:14:13.039 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:14:13.039 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 01:14:13.039 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 01:14:13.039 11:11:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:14:13.039 rmmod nvme_tcp 01:14:13.039 rmmod nvme_fabrics 01:14:13.039 rmmod nvme_keyring 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 99359 ']' 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge 
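After the run summary, the subsystem is deleted and the target is torn down; the killprocess trace that follows confirms the pid is still alive, refuses to kill anything that is not the expected process, then kills and waits so the next test starts from a clean slate. A simplified sketch of that defensive pattern (the helper name is illustrative):

#!/usr/bin/env bash
# Sketch of a defensive kill-and-wait helper in the style of killprocess below.
stop_target() {
    local pid=$1 name

    kill -0 "$pid" 2>/dev/null || return 0           # already gone, nothing to do

    name=$(ps --no-headers -o comm= "$pid")
    if [[ "$name" == sudo ]]; then                   # never kill the sudo wrapper itself
        echo "refusing to kill $pid ($name)" >&2
        return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                          # reap it so the RPC socket is free
}

# stop_target 99359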
-- nvmf/common.sh@490 -- # killprocess 99359 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 99359 ']' 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 99359 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99359 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 01:14:13.039 killing process with pid 99359 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99359' 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 99359 01:14:13.039 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 99359 01:14:13.298 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:14:13.298 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:14:13.298 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:14:13.298 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:14:13.298 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 01:14:13.299 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:13.299 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:14:13.299 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:13.299 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:14:13.299 01:14:13.299 real 0m3.597s 01:14:13.299 user 0m13.032s 01:14:13.299 sys 0m1.361s 01:14:13.299 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 01:14:13.299 11:11:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:14:13.299 ************************************ 01:14:13.299 END TEST nvmf_bdevio_no_huge 01:14:13.299 ************************************ 01:14:13.558 11:11:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:14:13.558 11:11:18 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:14:13.558 11:11:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:14:13.558 11:11:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:14:13.558 11:11:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:14:13.558 ************************************ 01:14:13.558 START TEST nvmf_tls 01:14:13.558 ************************************ 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:14:13.558 * Looking for test storage... 
01:14:13.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:13.558 11:11:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:14:13.559 Cannot find device "nvmf_tgt_br" 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:14:13.559 Cannot find device "nvmf_tgt_br2" 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:14:13.559 Cannot find device "nvmf_tgt_br" 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:14:13.559 Cannot find device "nvmf_tgt_br2" 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:14:13.559 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:13.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:14:13.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:14:13.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:14:13.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 01:14:13.818 01:14:13.818 --- 10.0.0.2 ping statistics --- 01:14:13.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:13.818 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:14:13.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:14:13.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 01:14:13.818 01:14:13.818 --- 10.0.0.3 ping statistics --- 01:14:13.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:13.818 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:14:13.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:14:13.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 01:14:13.818 01:14:13.818 --- 10.0.0.1 ping statistics --- 01:14:13.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:13.818 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99602 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99602 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99602 ']' 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:13.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:13.818 11:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:14.077 [2024-07-22 11:11:19.047944] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:14:14.077 [2024-07-22 11:11:19.048025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:14:14.077 [2024-07-22 11:11:19.184570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:14.077 [2024-07-22 11:11:19.269410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:14:14.077 [2024-07-22 11:11:19.269477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:14:14.077 [2024-07-22 11:11:19.269491] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:14.077 [2024-07-22 11:11:19.269502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:14.077 [2024-07-22 11:11:19.269511] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:14:14.078 [2024-07-22 11:11:19.269541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:14:15.013 11:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:15.013 11:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:14:15.013 11:11:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:14:15.013 11:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:14:15.013 11:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:15.013 11:11:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:14:15.013 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 01:14:15.013 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 01:14:15.272 true 01:14:15.272 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 01:14:15.272 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:14:15.531 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 01:14:15.531 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 01:14:15.531 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:14:15.531 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:14:15.531 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 01:14:15.790 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 01:14:15.790 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 01:14:15.790 11:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 01:14:16.049 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:14:16.049 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 01:14:16.307 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 01:14:16.307 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 01:14:16.307 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:14:16.307 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 01:14:16.564 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 01:14:16.564 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 01:14:16.564 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 01:14:16.823 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:14:16.823 11:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
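The TLS test starts by pinning the socket layer: the default socket implementation is switched to ssl, and each sock_impl_set_options call is immediately read back with sock_impl_get_options and checked with jq, so a silently ignored option fails the test up front rather than during I/O. A condensed sketch of that set-then-verify round trip, using the RPC names and flags from the trace (the helper function is illustrative):

#!/usr/bin/env bash
# Sketch: configure the ssl socket implementation and verify each option took effect.
RPC=./scripts/rpc.py

$RPC sock_set_default_impl -i ssl

set_and_check_tls_version() {
    local want=$1 got
    $RPC sock_impl_set_options -i ssl --tls-version "$want"
    got=$($RPC sock_impl_get_options -i ssl | jq -r .tls_version)
    [[ "$got" == "$want" ]] || { echo "tls_version is $got, expected $want" >&2; exit 1; }
}

set_and_check_tls_version 13      # TLS 1.3
set_and_check_tls_version 7       # a deliberately odd value, as exercised in the trace

# The same pattern covers kTLS: toggle it and read back .enable_ktls.
$RPC sock_impl_set_options -i ssl --enable-ktls
[[ $($RPC sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]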
01:14:16.823 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 01:14:16.823 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 01:14:16.823 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 01:14:17.081 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:14:17.081 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.JT9ZbINlTu 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.3x3eKd9fNx 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.JT9ZbINlTu 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3x3eKd9fNx 01:14:17.340 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:14:17.597 11:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 01:14:18.162 11:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.JT9ZbINlTu 
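Before the target is configured, the harness turns two raw keys into the NVMe TLS PSK interchange form ("NVMeTLSkey-1:01:…:"), writes each to a mktemp file, and restricts the files to 0600 because they will later be passed by path. The trace does this with a small embedded python snippet; the sketch below reproduces the idea under the assumption that the payload is the base64 of the literal key characters followed by their 4-byte little-endian CRC32 (consistent with the keys printed above), and that "01" mirrors the digest indicator. It is not a verbatim copy of format_interchange_psk.

#!/usr/bin/env bash
# Sketch: build an NVMe TLS PSK interchange key and store it with 0600 permissions.
format_psk() {
    local key=$1
    # Assumption: payload = base64(ASCII key bytes + little-endian CRC32 of those bytes).
    python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:01:{}:".format(base64.b64encode(key + crc).decode()))
PY
}

key_path=$(mktemp)
format_psk 00112233445566778899aabbccddeeff > "$key_path"
chmod 0600 "$key_path"      # the file is later handed over by path via --psk / --psk-path
echo "PSK written to $key_path"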
01:14:18.162 11:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JT9ZbINlTu 01:14:18.162 11:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:14:18.162 [2024-07-22 11:11:23.307971] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:18.162 11:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:14:18.419 11:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:14:18.676 [2024-07-22 11:11:23.688024] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:14:18.676 [2024-07-22 11:11:23.688277] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:14:18.676 11:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:14:18.934 malloc0 01:14:18.934 11:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:14:18.934 11:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JT9ZbINlTu 01:14:19.191 [2024-07-22 11:11:24.320140] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:14:19.191 11:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.JT9ZbINlTu 01:14:31.424 Initializing NVMe Controllers 01:14:31.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:14:31.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:14:31.424 Initialization complete. Launching workers. 
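The TLS-enabled target is assembled much like the earlier bdevio target, with two differences visible in the trace: the listener is added with -k so it requires TLS (hence the "TLS support is considered experimental" notice), and the host NQN is registered together with its PSK file before any initiator connects. Roughly, the RPC sequence and the subsequent TLS perf run look like this, with arguments copied from the trace and the key path being the mktemp file from the previous step:

#!/usr/bin/env bash
# Sketch: TLS-enabled NVMe/TCP target plus a perf run over the secured listener.
RPC=./scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode1
PSK=/tmp/tmp.JT9ZbINlTu          # interchange-format PSK file (mktemp name from the trace)

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem "$SUBNQN" -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns "$SUBNQN" malloc0 -n 1
$RPC nvmf_subsystem_add_host "$SUBNQN" nqn.2016-06.io.spdk:host1 --psk "$PSK"

# Initiator side: spdk_nvme_perf with the ssl socket implementation and the same PSK.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$SUBNQN hostnqn:nqn.2016-06.io.spdk:host1" \
    --psk-path "$PSK"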
01:14:31.424 ======================================================== 01:14:31.424 Latency(us) 01:14:31.424 Device Information : IOPS MiB/s Average min max 01:14:31.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11456.40 44.75 5587.38 1588.70 13604.64 01:14:31.424 ======================================================== 01:14:31.424 Total : 11456.40 44.75 5587.38 1588.70 13604.64 01:14:31.424 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JT9ZbINlTu 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JT9ZbINlTu' 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99948 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99948 /var/tmp/bdevperf.sock 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99948 ']' 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:14:31.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:31.424 11:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:14:31.424 [2024-07-22 11:11:34.597114] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
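The performance numbers just above come from the TLS initiator side of the same key: spdk_nvme_perf connects through the SSL socket implementation and loads the PSK from --psk-path. Condensed from the tls.sh@137 invocation in the trace (which additionally runs it inside the nvmf_tgt_ns_spdk namespace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.JT9ZbINlTu

The bdevperf instance whose startup begins here exercises the same TLS path through the bdev layer instead.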
01:14:31.424 [2024-07-22 11:11:34.597228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99948 ] 01:14:31.424 [2024-07-22 11:11:34.737753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:31.424 [2024-07-22 11:11:34.824069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:31.424 11:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:31.424 11:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:14:31.424 11:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JT9ZbINlTu 01:14:31.424 [2024-07-22 11:11:35.712155] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:14:31.424 [2024-07-22 11:11:35.712262] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:14:31.424 TLSTESTn1 01:14:31.424 11:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:14:31.424 Running I/O for 10 seconds... 01:14:41.385 01:14:41.385 Latency(us) 01:14:41.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:14:41.385 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:14:41.385 Verification LBA range: start 0x0 length 0x2000 01:14:41.385 TLSTESTn1 : 10.02 4517.29 17.65 0.00 0.00 28285.21 6017.40 39321.60 01:14:41.385 =================================================================================================================== 01:14:41.385 Total : 4517.29 17.65 0.00 0.00 28285.21 6017.40 39321.60 01:14:41.385 0 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99948 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99948 ']' 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99948 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99948 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:14:41.385 killing process with pid 99948 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99948' 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99948 01:14:41.385 11:11:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99948 01:14:41.385 Received shutdown signal, test time was about 10.000000 seconds 01:14:41.385 01:14:41.385 Latency(us) 01:14:41.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
01:14:41.385 =================================================================================================================== 01:14:41.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:14:41.385 [2024-07-22 11:11:45.979589] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3x3eKd9fNx 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3x3eKd9fNx 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3x3eKd9fNx 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3x3eKd9fNx' 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100094 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100094 /var/tmp/bdevperf.sock 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100094 ']' 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:41.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:41.385 11:11:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:41.385 [2024-07-22 11:11:46.304143] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
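Each run_bdevperf invocation in this test, the successful one above and the deliberately failing ones that follow, drives the same three-step flow; condensed from the commands traced at tls.sh@27-41 (backgrounding with & stands in for the test's waitforlisten handshake on the RPC socket):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &     # start bdevperf idle, waiting for RPCs
  # (the test waits for $sock to appear before issuing the attach)
  $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JT9ZbINlTu
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

Only the attach step differs between cases: the negative tests swap in the wrong key file, the wrong hostnqn, the wrong subnqn, or no --psk at all, and expect bdev_nvme_attach_controller to fail.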
01:14:41.385 [2024-07-22 11:11:46.304234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100094 ] 01:14:41.385 [2024-07-22 11:11:46.442038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:41.385 [2024-07-22 11:11:46.503353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3x3eKd9fNx 01:14:42.320 [2024-07-22 11:11:47.437439] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:14:42.320 [2024-07-22 11:11:47.437559] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:14:42.320 [2024-07-22 11:11:47.449298] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:14:42.320 [2024-07-22 11:11:47.449399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda8970 (107): Transport endpoint is not connected 01:14:42.320 [2024-07-22 11:11:47.450374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda8970 (9): Bad file descriptor 01:14:42.320 [2024-07-22 11:11:47.451371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:14:42.320 [2024-07-22 11:11:47.451395] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:14:42.320 [2024-07-22 11:11:47.451425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
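The attach failure above is the expected outcome: the case is wrapped in autotest_common.sh's NOT helper (traced at tls.sh@146 and common/autotest_common.sh@648-651), which inverts the wrapped command's exit status so the test passes only when run_bdevperf fails. A generic equivalent of that pattern, shown as an illustration rather than SPDK's exact implementation:

  NOT() {
      # succeed iff the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3x3eKd9fNx

The JSON-RPC error below and the 'return 1' / 'es=1' bookkeeping that follows are this machinery confirming the failure.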
01:14:42.320 2024/07/22 11:11:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.3x3eKd9fNx subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:42.320 request: 01:14:42.320 { 01:14:42.320 "method": "bdev_nvme_attach_controller", 01:14:42.320 "params": { 01:14:42.320 "name": "TLSTEST", 01:14:42.320 "trtype": "tcp", 01:14:42.320 "traddr": "10.0.0.2", 01:14:42.320 "adrfam": "ipv4", 01:14:42.320 "trsvcid": "4420", 01:14:42.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:14:42.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:14:42.320 "prchk_reftag": false, 01:14:42.320 "prchk_guard": false, 01:14:42.320 "hdgst": false, 01:14:42.320 "ddgst": false, 01:14:42.320 "psk": "/tmp/tmp.3x3eKd9fNx" 01:14:42.320 } 01:14:42.320 } 01:14:42.320 Got JSON-RPC error response 01:14:42.320 GoRPCClient: error on JSON-RPC call 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100094 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100094 ']' 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100094 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100094 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:14:42.320 killing process with pid 100094 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100094' 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100094 01:14:42.320 Received shutdown signal, test time was about 10.000000 seconds 01:14:42.320 01:14:42.320 Latency(us) 01:14:42.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:14:42.320 =================================================================================================================== 01:14:42.320 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:14:42.320 [2024-07-22 11:11:47.499996] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:14:42.320 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100094 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JT9ZbINlTu 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JT9ZbINlTu 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JT9ZbINlTu 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JT9ZbINlTu' 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100140 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100140 /var/tmp/bdevperf.sock 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100140 ']' 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:14:42.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:42.578 11:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:42.836 [2024-07-22 11:11:47.799018] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:14:42.836 [2024-07-22 11:11:47.799093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100140 ] 01:14:42.836 [2024-07-22 11:11:47.931087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:42.836 [2024-07-22 11:11:47.998256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:43.093 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:43.094 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:14:43.094 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.JT9ZbINlTu 01:14:43.352 [2024-07-22 11:11:48.373947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:14:43.352 [2024-07-22 11:11:48.374100] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:14:43.352 [2024-07-22 11:11:48.379392] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:14:43.352 [2024-07-22 11:11:48.379451] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:14:43.352 [2024-07-22 11:11:48.379568] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:14:43.352 [2024-07-22 11:11:48.380063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898970 (107): Transport endpoint is not connected 01:14:43.352 [2024-07-22 11:11:48.381172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898970 (9): Bad file descriptor 01:14:43.352 [2024-07-22 11:11:48.382165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:14:43.352 [2024-07-22 11:11:48.382190] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:14:43.352 [2024-07-22 11:11:48.382225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
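This failure is a PSK lookup miss rather than a bad key: the target derives the TLS PSK identity from the connecting host and subsystem ('NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' in the error above), and only host1 was registered with nvmf_subsystem_add_host. For the handshake to succeed for host2, the target would need its own registration, for example (hypothetical, deliberately not done by this test):

  # hypothetical: register host2 with its own key so the identity lookup can succeed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.JT9ZbINlTu

The same lookup miss explains the next case, which connects host1 to nqn.2016-06.io.spdk:cnode2, a subsystem that was never created on this target.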
01:14:43.353 2024/07/22 11:11:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.JT9ZbINlTu subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:43.353 request: 01:14:43.353 { 01:14:43.353 "method": "bdev_nvme_attach_controller", 01:14:43.353 "params": { 01:14:43.353 "name": "TLSTEST", 01:14:43.353 "trtype": "tcp", 01:14:43.353 "traddr": "10.0.0.2", 01:14:43.353 "adrfam": "ipv4", 01:14:43.353 "trsvcid": "4420", 01:14:43.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:14:43.353 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:14:43.353 "prchk_reftag": false, 01:14:43.353 "prchk_guard": false, 01:14:43.353 "hdgst": false, 01:14:43.353 "ddgst": false, 01:14:43.353 "psk": "/tmp/tmp.JT9ZbINlTu" 01:14:43.353 } 01:14:43.353 } 01:14:43.353 Got JSON-RPC error response 01:14:43.353 GoRPCClient: error on JSON-RPC call 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100140 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100140 ']' 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100140 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100140 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:14:43.353 killing process with pid 100140 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100140' 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100140 01:14:43.353 Received shutdown signal, test time was about 10.000000 seconds 01:14:43.353 01:14:43.353 Latency(us) 01:14:43.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:14:43.353 =================================================================================================================== 01:14:43.353 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:14:43.353 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100140 01:14:43.353 [2024-07-22 11:11:48.434031] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JT9ZbINlTu 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JT9ZbINlTu 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JT9ZbINlTu 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JT9ZbINlTu' 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100172 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100172 /var/tmp/bdevperf.sock 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100172 ']' 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:43.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:43.612 11:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:43.612 [2024-07-22 11:11:48.754677] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:14:43.612 [2024-07-22 11:11:48.754766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100172 ] 01:14:43.870 [2024-07-22 11:11:48.884141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:43.870 [2024-07-22 11:11:48.951372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:44.803 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:44.803 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:14:44.803 11:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JT9ZbINlTu 01:14:44.804 [2024-07-22 11:11:49.891863] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:14:44.804 [2024-07-22 11:11:49.892003] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:14:44.804 [2024-07-22 11:11:49.899581] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:14:44.804 [2024-07-22 11:11:49.899652] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:14:44.804 [2024-07-22 11:11:49.899730] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:14:44.804 [2024-07-22 11:11:49.899913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db2970 (107): Transport endpoint is not connected 01:14:44.804 [2024-07-22 11:11:49.900902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db2970 (9): Bad file descriptor 01:14:44.804 [2024-07-22 11:11:49.901898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 01:14:44.804 [2024-07-22 11:11:49.901923] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:14:44.804 [2024-07-22 11:11:49.901942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
01:14:44.804 2024/07/22 11:11:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.JT9ZbINlTu subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:44.804 request: 01:14:44.804 { 01:14:44.804 "method": "bdev_nvme_attach_controller", 01:14:44.804 "params": { 01:14:44.804 "name": "TLSTEST", 01:14:44.804 "trtype": "tcp", 01:14:44.804 "traddr": "10.0.0.2", 01:14:44.804 "adrfam": "ipv4", 01:14:44.804 "trsvcid": "4420", 01:14:44.804 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:14:44.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:14:44.804 "prchk_reftag": false, 01:14:44.804 "prchk_guard": false, 01:14:44.804 "hdgst": false, 01:14:44.804 "ddgst": false, 01:14:44.804 "psk": "/tmp/tmp.JT9ZbINlTu" 01:14:44.804 } 01:14:44.804 } 01:14:44.804 Got JSON-RPC error response 01:14:44.804 GoRPCClient: error on JSON-RPC call 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100172 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100172 ']' 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100172 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100172 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:14:44.804 killing process with pid 100172 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100172' 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100172 01:14:44.804 Received shutdown signal, test time was about 10.000000 seconds 01:14:44.804 01:14:44.804 Latency(us) 01:14:44.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:14:44.804 =================================================================================================================== 01:14:44.804 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:14:44.804 11:11:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100172 01:14:44.804 [2024-07-22 11:11:49.952231] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100212 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100212 /var/tmp/bdevperf.sock 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100212 ']' 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:45.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:45.062 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:45.320 [2024-07-22 11:11:50.278260] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:14:45.320 [2024-07-22 11:11:50.278342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100212 ] 01:14:45.320 [2024-07-22 11:11:50.408676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:45.320 [2024-07-22 11:11:50.475183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:45.579 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:45.579 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:14:45.579 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:14:45.838 [2024-07-22 11:11:50.799779] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:14:45.838 [2024-07-22 11:11:50.801721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bc7d0 (9): Bad file descriptor 01:14:45.838 [2024-07-22 11:11:50.802715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:14:45.838 [2024-07-22 11:11:50.802742] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 01:14:45.838 [2024-07-22 11:11:50.802757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:14:45.838 2024/07/22 11:11:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:45.838 request: 01:14:45.838 { 01:14:45.838 "method": "bdev_nvme_attach_controller", 01:14:45.838 "params": { 01:14:45.838 "name": "TLSTEST", 01:14:45.838 "trtype": "tcp", 01:14:45.838 "traddr": "10.0.0.2", 01:14:45.838 "adrfam": "ipv4", 01:14:45.838 "trsvcid": "4420", 01:14:45.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:14:45.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:14:45.838 "prchk_reftag": false, 01:14:45.838 "prchk_guard": false, 01:14:45.838 "hdgst": false, 01:14:45.838 "ddgst": false 01:14:45.838 } 01:14:45.838 } 01:14:45.838 Got JSON-RPC error response 01:14:45.838 GoRPCClient: error on JSON-RPC call 01:14:45.838 11:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100212 01:14:45.838 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100212 ']' 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100212 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100212 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- 
# '[' reactor_2 = sudo ']' 01:14:45.839 killing process with pid 100212 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100212' 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100212 01:14:45.839 Received shutdown signal, test time was about 10.000000 seconds 01:14:45.839 01:14:45.839 Latency(us) 01:14:45.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:14:45.839 =================================================================================================================== 01:14:45.839 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:14:45.839 11:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100212 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 99602 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99602 ']' 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99602 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99602 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:14:46.097 killing process with pid 99602 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99602' 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99602 01:14:46.097 [2024-07-22 11:11:51.123855] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:14:46.097 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99602 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.NMZuEDqCJg 
01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.NMZuEDqCJg 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100254 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100254 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100254 ']' 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:46.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:46.355 11:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:46.355 [2024-07-22 11:11:51.425742] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:14:46.355 [2024-07-22 11:11:51.425824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:14:46.613 [2024-07-22 11:11:51.564422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:46.613 [2024-07-22 11:11:51.646887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:14:46.613 [2024-07-22 11:11:51.646950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:14:46.614 [2024-07-22 11:11:51.646989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:46.614 [2024-07-22 11:11:51.647004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:46.614 [2024-07-22 11:11:51.647014] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
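Unlike the bdevperf helpers, this nvmf_tgt instance is launched by nvmfappstart with -i 0 -e 0xFFFF (visible in the traced command line), so every tracepoint group is enabled and the trace notices above apply. Following the hint the application itself prints, a snapshot could be captured while the target runs; a sketch, with the binary path assumed from this repo layout since the notice only names the tool:

  # capture trace events from the shm region named nvmf, instance id 0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or keep /dev/shm/nvmf_trace.0 for offline analysis, as the notice suggests

The test itself does not capture a trace; the flags are simply part of the helper's default invocation.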
01:14:46.614 [2024-07-22 11:11:51.647045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.NMZuEDqCJg 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NMZuEDqCJg 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:14:47.546 [2024-07-22 11:11:52.688175] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:47.546 11:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:14:47.804 11:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:14:48.061 [2024-07-22 11:11:53.144308] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:14:48.061 [2024-07-22 11:11:53.144569] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:14:48.061 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:14:48.319 malloc0 01:14:48.319 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:14:48.577 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg 01:14:48.835 [2024-07-22 11:11:53.819120] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NMZuEDqCJg 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NMZuEDqCJg' 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100357 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:14:48.835 11:11:53 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100357 /var/tmp/bdevperf.sock 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100357 ']' 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:14:48.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:14:48.835 11:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:14:48.835 [2024-07-22 11:11:53.880440] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:14:48.835 [2024-07-22 11:11:53.880506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100357 ] 01:14:48.835 [2024-07-22 11:11:54.012475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:49.093 [2024-07-22 11:11:54.086283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:14:49.659 11:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:14:49.659 11:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:14:49.659 11:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg 01:14:49.916 [2024-07-22 11:11:54.927760] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:14:49.917 [2024-07-22 11:11:54.927882] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:14:49.917 TLSTESTn1 01:14:49.917 11:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:14:50.174 Running I/O for 10 seconds... 
01:15:00.141 01:15:00.141 Latency(us) 01:15:00.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:00.141 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:15:00.141 Verification LBA range: start 0x0 length 0x2000 01:15:00.141 TLSTESTn1 : 10.02 4465.88 17.44 0.00 0.00 28606.32 6166.34 29908.25 01:15:00.141 =================================================================================================================== 01:15:00.141 Total : 4465.88 17.44 0.00 0.00 28606.32 6166.34 29908.25 01:15:00.141 0 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 100357 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100357 ']' 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100357 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100357 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:15:00.141 killing process with pid 100357 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100357' 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100357 01:15:00.141 Received shutdown signal, test time was about 10.000000 seconds 01:15:00.141 01:15:00.141 Latency(us) 01:15:00.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:00.141 =================================================================================================================== 01:15:00.141 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:00.141 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100357 01:15:00.141 [2024-07-22 11:12:05.208377] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.NMZuEDqCJg 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NMZuEDqCJg 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NMZuEDqCJg 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NMZuEDqCJg 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NMZuEDqCJg' 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100504 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100504 /var/tmp/bdevperf.sock 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100504 ']' 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:00.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:00.399 11:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:00.399 [2024-07-22 11:12:05.524469] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:00.399 [2024-07-22 11:12:05.524571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100504 ] 01:15:00.656 [2024-07-22 11:12:05.664889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:00.656 [2024-07-22 11:12:05.727015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:15:01.221 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:01.221 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:01.221 11:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg 01:15:01.494 [2024-07-22 11:12:06.585294] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:01.494 [2024-07-22 11:12:06.585393] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 01:15:01.494 [2024-07-22 11:12:06.585403] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.NMZuEDqCJg 01:15:01.494 2024/07/22 11:12:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.NMZuEDqCJg subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received 
for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 01:15:01.494 request: 01:15:01.494 { 01:15:01.494 "method": "bdev_nvme_attach_controller", 01:15:01.494 "params": { 01:15:01.494 "name": "TLSTEST", 01:15:01.494 "trtype": "tcp", 01:15:01.494 "traddr": "10.0.0.2", 01:15:01.494 "adrfam": "ipv4", 01:15:01.494 "trsvcid": "4420", 01:15:01.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:15:01.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:15:01.494 "prchk_reftag": false, 01:15:01.494 "prchk_guard": false, 01:15:01.494 "hdgst": false, 01:15:01.494 "ddgst": false, 01:15:01.494 "psk": "/tmp/tmp.NMZuEDqCJg" 01:15:01.494 } 01:15:01.494 } 01:15:01.494 Got JSON-RPC error response 01:15:01.494 GoRPCClient: error on JSON-RPC call 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100504 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100504 ']' 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100504 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100504 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:15:01.494 killing process with pid 100504 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100504' 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100504 01:15:01.494 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100504 01:15:01.494 Received shutdown signal, test time was about 10.000000 seconds 01:15:01.494 01:15:01.494 Latency(us) 01:15:01.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:01.494 =================================================================================================================== 01:15:01.494 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 100254 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100254 ']' 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100254 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100254 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:01.752 killing process with pid 100254 01:15:01.752 11:12:06 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100254' 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100254 01:15:01.752 [2024-07-22 11:12:06.918406] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:15:01.752 11:12:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100254 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100555 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100555 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100555 ']' 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:02.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:02.010 11:12:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:02.010 [2024-07-22 11:12:07.162223] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:02.010 [2024-07-22 11:12:07.162306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:02.268 [2024-07-22 11:12:07.289428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:02.268 [2024-07-22 11:12:07.352891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:15:02.268 [2024-07-22 11:12:07.352995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:02.268 [2024-07-22 11:12:07.353016] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:02.268 [2024-07-22 11:12:07.353024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:02.268 [2024-07-22 11:12:07.353031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
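The attach failure above is the intended negative case: /tmp/tmp.NMZuEDqCJg was created without restrictive permissions, so bdev_nvme_load_psk rejects it ("Incorrect permissions for PSK file") and bdevperf returns es=1. As a minimal sketch of the passing path this suite exercises later in the log (same key path and NQNs as in this run; the full /home/vagrant/spdk_repo prefix is shortened here for readability):

  # tighten the key file first -- the loader rejects loosely-permissioned PSK files; 0600 is what target/tls.sh@181 applies
  chmod 0600 /tmp/tmp.NMZuEDqCJg
  # the same attach that failed above then succeeds
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg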
01:15:02.268 [2024-07-22 11:12:07.353054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:02.834 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:02.834 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:02.834 11:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:02.834 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:02.834 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:03.090 11:12:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:03.090 11:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.NMZuEDqCJg 01:15:03.090 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 01:15:03.090 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.NMZuEDqCJg 01:15:03.090 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 01:15:03.090 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:03.091 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 01:15:03.091 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:03.091 11:12:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.NMZuEDqCJg 01:15:03.091 11:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NMZuEDqCJg 01:15:03.091 11:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:15:03.091 [2024-07-22 11:12:08.273284] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:03.091 11:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:15:03.348 11:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:15:03.605 [2024-07-22 11:12:08.661347] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:03.605 [2024-07-22 11:12:08.661540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:03.605 11:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:15:03.862 malloc0 01:15:03.862 11:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:15:04.118 11:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg 01:15:04.376 [2024-07-22 11:12:09.383616] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 01:15:04.376 [2024-07-22 11:12:09.383715] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 01:15:04.376 [2024-07-22 11:12:09.383744] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 01:15:04.376 2024/07/22 11:12:09 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.NMZuEDqCJg], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 01:15:04.376 request: 01:15:04.376 { 01:15:04.376 "method": "nvmf_subsystem_add_host", 01:15:04.376 "params": { 01:15:04.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:04.376 "host": "nqn.2016-06.io.spdk:host1", 01:15:04.376 "psk": "/tmp/tmp.NMZuEDqCJg" 01:15:04.376 } 01:15:04.376 } 01:15:04.376 Got JSON-RPC error response 01:15:04.376 GoRPCClient: error on JSON-RPC call 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 100555 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100555 ']' 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100555 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100555 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:04.376 killing process with pid 100555 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100555' 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100555 01:15:04.376 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100555 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.NMZuEDqCJg 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100659 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100659 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100659 ']' 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:04.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
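With the key file locked down (chmod 0600 at target/tls.sh@181), target/tls.sh@184-185 restart the target and re-run setup_nvmf_tgt; its RPCs appear in the entries that follow. Condensed (long repo paths omitted), the target-side TLS sequence used in this run is:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg

Here -k enables TLS on the TCP listener and --psk registers the pre-shared key for host1; the notices in the log flag both as experimental, with the PSK-path form deprecated for removal in v24.09.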
01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:04.635 11:12:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:04.635 [2024-07-22 11:12:09.675718] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:04.635 [2024-07-22 11:12:09.675783] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:04.635 [2024-07-22 11:12:09.807408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:04.892 [2024-07-22 11:12:09.871113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:15:04.892 [2024-07-22 11:12:09.871170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:04.892 [2024-07-22 11:12:09.871180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:04.892 [2024-07-22 11:12:09.871188] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:04.892 [2024-07-22 11:12:09.871194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:15:04.892 [2024-07-22 11:12:09.871222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.NMZuEDqCJg 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NMZuEDqCJg 01:15:05.455 11:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:15:05.713 [2024-07-22 11:12:10.800537] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:05.713 11:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:15:05.970 11:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:15:06.226 [2024-07-22 11:12:11.248604] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:06.226 [2024-07-22 11:12:11.248787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:06.226 11:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:15:06.483 malloc0 01:15:06.483 11:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:15:06.483 11:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg 01:15:06.740 [2024-07-22 11:12:11.831277] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=100756 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 100756 /var/tmp/bdevperf.sock 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100756 ']' 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:06.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:06.740 11:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:06.740 [2024-07-22 11:12:11.902450] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:06.740 [2024-07-22 11:12:11.902540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100756 ] 01:15:06.996 [2024-07-22 11:12:12.039816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:06.996 [2024-07-22 11:12:12.127464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:15:07.925 11:12:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:07.925 11:12:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:07.925 11:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg 01:15:07.925 [2024-07-22 11:12:13.057260] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:07.925 [2024-07-22 11:12:13.057432] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:15:07.925 TLSTESTn1 01:15:08.182 11:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:15:08.439 11:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 01:15:08.439 "subsystems": [ 01:15:08.439 { 01:15:08.439 "subsystem": "keyring", 01:15:08.439 "config": [] 01:15:08.439 }, 01:15:08.439 { 01:15:08.439 "subsystem": "iobuf", 01:15:08.439 "config": [ 01:15:08.439 { 01:15:08.439 "method": "iobuf_set_options", 01:15:08.439 "params": { 01:15:08.439 "large_bufsize": 
135168, 01:15:08.439 "large_pool_count": 1024, 01:15:08.439 "small_bufsize": 8192, 01:15:08.439 "small_pool_count": 8192 01:15:08.439 } 01:15:08.439 } 01:15:08.439 ] 01:15:08.439 }, 01:15:08.439 { 01:15:08.439 "subsystem": "sock", 01:15:08.439 "config": [ 01:15:08.439 { 01:15:08.439 "method": "sock_set_default_impl", 01:15:08.439 "params": { 01:15:08.439 "impl_name": "posix" 01:15:08.439 } 01:15:08.439 }, 01:15:08.439 { 01:15:08.439 "method": "sock_impl_set_options", 01:15:08.439 "params": { 01:15:08.439 "enable_ktls": false, 01:15:08.439 "enable_placement_id": 0, 01:15:08.439 "enable_quickack": false, 01:15:08.439 "enable_recv_pipe": true, 01:15:08.439 "enable_zerocopy_send_client": false, 01:15:08.439 "enable_zerocopy_send_server": true, 01:15:08.439 "impl_name": "ssl", 01:15:08.439 "recv_buf_size": 4096, 01:15:08.439 "send_buf_size": 4096, 01:15:08.439 "tls_version": 0, 01:15:08.439 "zerocopy_threshold": 0 01:15:08.439 } 01:15:08.439 }, 01:15:08.439 { 01:15:08.439 "method": "sock_impl_set_options", 01:15:08.439 "params": { 01:15:08.439 "enable_ktls": false, 01:15:08.439 "enable_placement_id": 0, 01:15:08.439 "enable_quickack": false, 01:15:08.439 "enable_recv_pipe": true, 01:15:08.439 "enable_zerocopy_send_client": false, 01:15:08.439 "enable_zerocopy_send_server": true, 01:15:08.439 "impl_name": "posix", 01:15:08.439 "recv_buf_size": 2097152, 01:15:08.439 "send_buf_size": 2097152, 01:15:08.439 "tls_version": 0, 01:15:08.439 "zerocopy_threshold": 0 01:15:08.439 } 01:15:08.439 } 01:15:08.439 ] 01:15:08.439 }, 01:15:08.439 { 01:15:08.439 "subsystem": "vmd", 01:15:08.439 "config": [] 01:15:08.439 }, 01:15:08.439 { 01:15:08.439 "subsystem": "accel", 01:15:08.439 "config": [ 01:15:08.439 { 01:15:08.439 "method": "accel_set_options", 01:15:08.440 "params": { 01:15:08.440 "buf_count": 2048, 01:15:08.440 "large_cache_size": 16, 01:15:08.440 "sequence_count": 2048, 01:15:08.440 "small_cache_size": 128, 01:15:08.440 "task_count": 2048 01:15:08.440 } 01:15:08.440 } 01:15:08.440 ] 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "subsystem": "bdev", 01:15:08.440 "config": [ 01:15:08.440 { 01:15:08.440 "method": "bdev_set_options", 01:15:08.440 "params": { 01:15:08.440 "bdev_auto_examine": true, 01:15:08.440 "bdev_io_cache_size": 256, 01:15:08.440 "bdev_io_pool_size": 65535, 01:15:08.440 "iobuf_large_cache_size": 16, 01:15:08.440 "iobuf_small_cache_size": 128 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "bdev_raid_set_options", 01:15:08.440 "params": { 01:15:08.440 "process_max_bandwidth_mb_sec": 0, 01:15:08.440 "process_window_size_kb": 1024 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "bdev_iscsi_set_options", 01:15:08.440 "params": { 01:15:08.440 "timeout_sec": 30 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "bdev_nvme_set_options", 01:15:08.440 "params": { 01:15:08.440 "action_on_timeout": "none", 01:15:08.440 "allow_accel_sequence": false, 01:15:08.440 "arbitration_burst": 0, 01:15:08.440 "bdev_retry_count": 3, 01:15:08.440 "ctrlr_loss_timeout_sec": 0, 01:15:08.440 "delay_cmd_submit": true, 01:15:08.440 "dhchap_dhgroups": [ 01:15:08.440 "null", 01:15:08.440 "ffdhe2048", 01:15:08.440 "ffdhe3072", 01:15:08.440 "ffdhe4096", 01:15:08.440 "ffdhe6144", 01:15:08.440 "ffdhe8192" 01:15:08.440 ], 01:15:08.440 "dhchap_digests": [ 01:15:08.440 "sha256", 01:15:08.440 "sha384", 01:15:08.440 "sha512" 01:15:08.440 ], 01:15:08.440 "disable_auto_failback": false, 01:15:08.440 "fast_io_fail_timeout_sec": 0, 01:15:08.440 "generate_uuids": false, 
01:15:08.440 "high_priority_weight": 0, 01:15:08.440 "io_path_stat": false, 01:15:08.440 "io_queue_requests": 0, 01:15:08.440 "keep_alive_timeout_ms": 10000, 01:15:08.440 "low_priority_weight": 0, 01:15:08.440 "medium_priority_weight": 0, 01:15:08.440 "nvme_adminq_poll_period_us": 10000, 01:15:08.440 "nvme_error_stat": false, 01:15:08.440 "nvme_ioq_poll_period_us": 0, 01:15:08.440 "rdma_cm_event_timeout_ms": 0, 01:15:08.440 "rdma_max_cq_size": 0, 01:15:08.440 "rdma_srq_size": 0, 01:15:08.440 "reconnect_delay_sec": 0, 01:15:08.440 "timeout_admin_us": 0, 01:15:08.440 "timeout_us": 0, 01:15:08.440 "transport_ack_timeout": 0, 01:15:08.440 "transport_retry_count": 4, 01:15:08.440 "transport_tos": 0 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "bdev_nvme_set_hotplug", 01:15:08.440 "params": { 01:15:08.440 "enable": false, 01:15:08.440 "period_us": 100000 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "bdev_malloc_create", 01:15:08.440 "params": { 01:15:08.440 "block_size": 4096, 01:15:08.440 "name": "malloc0", 01:15:08.440 "num_blocks": 8192, 01:15:08.440 "optimal_io_boundary": 0, 01:15:08.440 "physical_block_size": 4096, 01:15:08.440 "uuid": "56db2dc1-f406-4e57-bd88-3cd4450689d7" 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "bdev_wait_for_examine" 01:15:08.440 } 01:15:08.440 ] 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "subsystem": "nbd", 01:15:08.440 "config": [] 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "subsystem": "scheduler", 01:15:08.440 "config": [ 01:15:08.440 { 01:15:08.440 "method": "framework_set_scheduler", 01:15:08.440 "params": { 01:15:08.440 "name": "static" 01:15:08.440 } 01:15:08.440 } 01:15:08.440 ] 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "subsystem": "nvmf", 01:15:08.440 "config": [ 01:15:08.440 { 01:15:08.440 "method": "nvmf_set_config", 01:15:08.440 "params": { 01:15:08.440 "admin_cmd_passthru": { 01:15:08.440 "identify_ctrlr": false 01:15:08.440 }, 01:15:08.440 "discovery_filter": "match_any" 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "nvmf_set_max_subsystems", 01:15:08.440 "params": { 01:15:08.440 "max_subsystems": 1024 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "nvmf_set_crdt", 01:15:08.440 "params": { 01:15:08.440 "crdt1": 0, 01:15:08.440 "crdt2": 0, 01:15:08.440 "crdt3": 0 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "nvmf_create_transport", 01:15:08.440 "params": { 01:15:08.440 "abort_timeout_sec": 1, 01:15:08.440 "ack_timeout": 0, 01:15:08.440 "buf_cache_size": 4294967295, 01:15:08.440 "c2h_success": false, 01:15:08.440 "data_wr_pool_size": 0, 01:15:08.440 "dif_insert_or_strip": false, 01:15:08.440 "in_capsule_data_size": 4096, 01:15:08.440 "io_unit_size": 131072, 01:15:08.440 "max_aq_depth": 128, 01:15:08.440 "max_io_qpairs_per_ctrlr": 127, 01:15:08.440 "max_io_size": 131072, 01:15:08.440 "max_queue_depth": 128, 01:15:08.440 "num_shared_buffers": 511, 01:15:08.440 "sock_priority": 0, 01:15:08.440 "trtype": "TCP", 01:15:08.440 "zcopy": false 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "nvmf_create_subsystem", 01:15:08.440 "params": { 01:15:08.440 "allow_any_host": false, 01:15:08.440 "ana_reporting": false, 01:15:08.440 "max_cntlid": 65519, 01:15:08.440 "max_namespaces": 10, 01:15:08.440 "min_cntlid": 1, 01:15:08.440 "model_number": "SPDK bdev Controller", 01:15:08.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:08.440 "serial_number": "SPDK00000000000001" 01:15:08.440 } 01:15:08.440 }, 
01:15:08.440 { 01:15:08.440 "method": "nvmf_subsystem_add_host", 01:15:08.440 "params": { 01:15:08.440 "host": "nqn.2016-06.io.spdk:host1", 01:15:08.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:08.440 "psk": "/tmp/tmp.NMZuEDqCJg" 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "nvmf_subsystem_add_ns", 01:15:08.440 "params": { 01:15:08.440 "namespace": { 01:15:08.440 "bdev_name": "malloc0", 01:15:08.440 "nguid": "56DB2DC1F4064E57BD883CD4450689D7", 01:15:08.440 "no_auto_visible": false, 01:15:08.440 "nsid": 1, 01:15:08.440 "uuid": "56db2dc1-f406-4e57-bd88-3cd4450689d7" 01:15:08.440 }, 01:15:08.440 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:15:08.440 } 01:15:08.440 }, 01:15:08.440 { 01:15:08.440 "method": "nvmf_subsystem_add_listener", 01:15:08.440 "params": { 01:15:08.440 "listen_address": { 01:15:08.440 "adrfam": "IPv4", 01:15:08.440 "traddr": "10.0.0.2", 01:15:08.440 "trsvcid": "4420", 01:15:08.440 "trtype": "TCP" 01:15:08.440 }, 01:15:08.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:08.440 "secure_channel": true 01:15:08.440 } 01:15:08.440 } 01:15:08.440 ] 01:15:08.440 } 01:15:08.440 ] 01:15:08.440 }' 01:15:08.440 11:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:15:08.698 11:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 01:15:08.698 "subsystems": [ 01:15:08.698 { 01:15:08.698 "subsystem": "keyring", 01:15:08.698 "config": [] 01:15:08.698 }, 01:15:08.698 { 01:15:08.698 "subsystem": "iobuf", 01:15:08.698 "config": [ 01:15:08.698 { 01:15:08.698 "method": "iobuf_set_options", 01:15:08.698 "params": { 01:15:08.698 "large_bufsize": 135168, 01:15:08.698 "large_pool_count": 1024, 01:15:08.698 "small_bufsize": 8192, 01:15:08.699 "small_pool_count": 8192 01:15:08.699 } 01:15:08.699 } 01:15:08.699 ] 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "subsystem": "sock", 01:15:08.699 "config": [ 01:15:08.699 { 01:15:08.699 "method": "sock_set_default_impl", 01:15:08.699 "params": { 01:15:08.699 "impl_name": "posix" 01:15:08.699 } 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "method": "sock_impl_set_options", 01:15:08.699 "params": { 01:15:08.699 "enable_ktls": false, 01:15:08.699 "enable_placement_id": 0, 01:15:08.699 "enable_quickack": false, 01:15:08.699 "enable_recv_pipe": true, 01:15:08.699 "enable_zerocopy_send_client": false, 01:15:08.699 "enable_zerocopy_send_server": true, 01:15:08.699 "impl_name": "ssl", 01:15:08.699 "recv_buf_size": 4096, 01:15:08.699 "send_buf_size": 4096, 01:15:08.699 "tls_version": 0, 01:15:08.699 "zerocopy_threshold": 0 01:15:08.699 } 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "method": "sock_impl_set_options", 01:15:08.699 "params": { 01:15:08.699 "enable_ktls": false, 01:15:08.699 "enable_placement_id": 0, 01:15:08.699 "enable_quickack": false, 01:15:08.699 "enable_recv_pipe": true, 01:15:08.699 "enable_zerocopy_send_client": false, 01:15:08.699 "enable_zerocopy_send_server": true, 01:15:08.699 "impl_name": "posix", 01:15:08.699 "recv_buf_size": 2097152, 01:15:08.699 "send_buf_size": 2097152, 01:15:08.699 "tls_version": 0, 01:15:08.699 "zerocopy_threshold": 0 01:15:08.699 } 01:15:08.699 } 01:15:08.699 ] 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "subsystem": "vmd", 01:15:08.699 "config": [] 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "subsystem": "accel", 01:15:08.699 "config": [ 01:15:08.699 { 01:15:08.699 "method": "accel_set_options", 01:15:08.699 "params": { 01:15:08.699 "buf_count": 2048, 01:15:08.699 "large_cache_size": 16, 01:15:08.699 
"sequence_count": 2048, 01:15:08.699 "small_cache_size": 128, 01:15:08.699 "task_count": 2048 01:15:08.699 } 01:15:08.699 } 01:15:08.699 ] 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "subsystem": "bdev", 01:15:08.699 "config": [ 01:15:08.699 { 01:15:08.699 "method": "bdev_set_options", 01:15:08.699 "params": { 01:15:08.699 "bdev_auto_examine": true, 01:15:08.699 "bdev_io_cache_size": 256, 01:15:08.699 "bdev_io_pool_size": 65535, 01:15:08.699 "iobuf_large_cache_size": 16, 01:15:08.699 "iobuf_small_cache_size": 128 01:15:08.699 } 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "method": "bdev_raid_set_options", 01:15:08.699 "params": { 01:15:08.699 "process_max_bandwidth_mb_sec": 0, 01:15:08.699 "process_window_size_kb": 1024 01:15:08.699 } 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "method": "bdev_iscsi_set_options", 01:15:08.699 "params": { 01:15:08.699 "timeout_sec": 30 01:15:08.699 } 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "method": "bdev_nvme_set_options", 01:15:08.699 "params": { 01:15:08.699 "action_on_timeout": "none", 01:15:08.699 "allow_accel_sequence": false, 01:15:08.699 "arbitration_burst": 0, 01:15:08.699 "bdev_retry_count": 3, 01:15:08.699 "ctrlr_loss_timeout_sec": 0, 01:15:08.699 "delay_cmd_submit": true, 01:15:08.699 "dhchap_dhgroups": [ 01:15:08.699 "null", 01:15:08.699 "ffdhe2048", 01:15:08.699 "ffdhe3072", 01:15:08.699 "ffdhe4096", 01:15:08.699 "ffdhe6144", 01:15:08.699 "ffdhe8192" 01:15:08.699 ], 01:15:08.699 "dhchap_digests": [ 01:15:08.699 "sha256", 01:15:08.699 "sha384", 01:15:08.699 "sha512" 01:15:08.699 ], 01:15:08.699 "disable_auto_failback": false, 01:15:08.699 "fast_io_fail_timeout_sec": 0, 01:15:08.699 "generate_uuids": false, 01:15:08.699 "high_priority_weight": 0, 01:15:08.699 "io_path_stat": false, 01:15:08.699 "io_queue_requests": 512, 01:15:08.699 "keep_alive_timeout_ms": 10000, 01:15:08.699 "low_priority_weight": 0, 01:15:08.699 "medium_priority_weight": 0, 01:15:08.699 "nvme_adminq_poll_period_us": 10000, 01:15:08.699 "nvme_error_stat": false, 01:15:08.699 "nvme_ioq_poll_period_us": 0, 01:15:08.699 "rdma_cm_event_timeout_ms": 0, 01:15:08.699 "rdma_max_cq_size": 0, 01:15:08.699 "rdma_srq_size": 0, 01:15:08.699 "reconnect_delay_sec": 0, 01:15:08.699 "timeout_admin_us": 0, 01:15:08.699 "timeout_us": 0, 01:15:08.699 "transport_ack_timeout": 0, 01:15:08.699 "transport_retry_count": 4, 01:15:08.699 "transport_tos": 0 01:15:08.699 } 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "method": "bdev_nvme_attach_controller", 01:15:08.699 "params": { 01:15:08.699 "adrfam": "IPv4", 01:15:08.699 "ctrlr_loss_timeout_sec": 0, 01:15:08.699 "ddgst": false, 01:15:08.699 "fast_io_fail_timeout_sec": 0, 01:15:08.699 "hdgst": false, 01:15:08.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:15:08.699 "name": "TLSTEST", 01:15:08.699 "prchk_guard": false, 01:15:08.699 "prchk_reftag": false, 01:15:08.699 "psk": "/tmp/tmp.NMZuEDqCJg", 01:15:08.699 "reconnect_delay_sec": 0, 01:15:08.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:15:08.699 "traddr": "10.0.0.2", 01:15:08.699 "trsvcid": "4420", 01:15:08.699 "trtype": "TCP" 01:15:08.699 } 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "method": "bdev_nvme_set_hotplug", 01:15:08.699 "params": { 01:15:08.699 "enable": false, 01:15:08.699 "period_us": 100000 01:15:08.699 } 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "method": "bdev_wait_for_examine" 01:15:08.699 } 01:15:08.699 ] 01:15:08.699 }, 01:15:08.699 { 01:15:08.699 "subsystem": "nbd", 01:15:08.699 "config": [] 01:15:08.699 } 01:15:08.699 ] 01:15:08.699 }' 01:15:08.699 11:12:13 
nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 100756 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100756 ']' 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100756 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100756 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:15:08.699 killing process with pid 100756 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100756' 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100756 01:15:08.699 11:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100756 01:15:08.699 Received shutdown signal, test time was about 10.000000 seconds 01:15:08.699 01:15:08.699 Latency(us) 01:15:08.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:08.699 =================================================================================================================== 01:15:08.699 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:15:08.699 [2024-07-22 11:12:13.805489] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 100659 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100659 ']' 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100659 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100659 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:08.972 killing process with pid 100659 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100659' 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100659 01:15:08.972 [2024-07-22 11:12:14.116188] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:15:08.972 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100659 01:15:09.240 11:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 01:15:09.240 11:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 01:15:09.240 "subsystems": [ 01:15:09.240 { 01:15:09.240 "subsystem": "keyring", 01:15:09.240 "config": [] 01:15:09.240 }, 01:15:09.240 { 01:15:09.240 "subsystem": "iobuf", 01:15:09.240 "config": [ 01:15:09.240 { 01:15:09.240 "method": "iobuf_set_options", 01:15:09.240 "params": { 01:15:09.240 "large_bufsize": 135168, 01:15:09.240 "large_pool_count": 1024, 01:15:09.240 "small_bufsize": 8192, 01:15:09.240 
"small_pool_count": 8192 01:15:09.240 } 01:15:09.240 } 01:15:09.240 ] 01:15:09.240 }, 01:15:09.240 { 01:15:09.240 "subsystem": "sock", 01:15:09.240 "config": [ 01:15:09.240 { 01:15:09.240 "method": "sock_set_default_impl", 01:15:09.240 "params": { 01:15:09.240 "impl_name": "posix" 01:15:09.240 } 01:15:09.240 }, 01:15:09.240 { 01:15:09.240 "method": "sock_impl_set_options", 01:15:09.240 "params": { 01:15:09.240 "enable_ktls": false, 01:15:09.240 "enable_placement_id": 0, 01:15:09.240 "enable_quickack": false, 01:15:09.240 "enable_recv_pipe": true, 01:15:09.240 "enable_zerocopy_send_client": false, 01:15:09.240 "enable_zerocopy_send_server": true, 01:15:09.240 "impl_name": "ssl", 01:15:09.240 "recv_buf_size": 4096, 01:15:09.240 "send_buf_size": 4096, 01:15:09.240 "tls_version": 0, 01:15:09.240 "zerocopy_threshold": 0 01:15:09.240 } 01:15:09.240 }, 01:15:09.240 { 01:15:09.240 "method": "sock_impl_set_options", 01:15:09.240 "params": { 01:15:09.240 "enable_ktls": false, 01:15:09.240 "enable_placement_id": 0, 01:15:09.240 "enable_quickack": false, 01:15:09.240 "enable_recv_pipe": true, 01:15:09.240 "enable_zerocopy_send_client": false, 01:15:09.240 "enable_zerocopy_send_server": true, 01:15:09.240 "impl_name": "posix", 01:15:09.240 "recv_buf_size": 2097152, 01:15:09.240 "send_buf_size": 2097152, 01:15:09.240 "tls_version": 0, 01:15:09.240 "zerocopy_threshold": 0 01:15:09.240 } 01:15:09.240 } 01:15:09.240 ] 01:15:09.240 }, 01:15:09.240 { 01:15:09.240 "subsystem": "vmd", 01:15:09.240 "config": [] 01:15:09.240 }, 01:15:09.240 { 01:15:09.240 "subsystem": "accel", 01:15:09.240 "config": [ 01:15:09.240 { 01:15:09.240 "method": "accel_set_options", 01:15:09.240 "params": { 01:15:09.240 "buf_count": 2048, 01:15:09.240 "large_cache_size": 16, 01:15:09.240 "sequence_count": 2048, 01:15:09.241 "small_cache_size": 128, 01:15:09.241 "task_count": 2048 01:15:09.241 } 01:15:09.241 } 01:15:09.241 ] 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "subsystem": "bdev", 01:15:09.241 "config": [ 01:15:09.241 { 01:15:09.241 "method": "bdev_set_options", 01:15:09.241 "params": { 01:15:09.241 "bdev_auto_examine": true, 01:15:09.241 "bdev_io_cache_size": 256, 01:15:09.241 "bdev_io_pool_size": 65535, 01:15:09.241 "iobuf_large_cache_size": 16, 01:15:09.241 "iobuf_small_cache_size": 128 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "bdev_raid_set_options", 01:15:09.241 "params": { 01:15:09.241 "process_max_bandwidth_mb_sec": 0, 01:15:09.241 "process_window_size_kb": 1024 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "bdev_iscsi_set_options", 01:15:09.241 "params": { 01:15:09.241 "timeout_sec": 30 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "bdev_nvme_set_options", 01:15:09.241 "params": { 01:15:09.241 "action_on_timeout": "none", 01:15:09.241 "allow_accel_sequence": false, 01:15:09.241 "arbitration_burst": 0, 01:15:09.241 "bdev_retry_count": 3, 01:15:09.241 "ctrlr_loss_timeout_sec": 0, 01:15:09.241 "delay_cmd_submit": true, 01:15:09.241 "dhchap_dhgroups": [ 01:15:09.241 "null", 01:15:09.241 "ffdhe2048", 01:15:09.241 "ffdhe3072", 01:15:09.241 "ffdhe4096", 01:15:09.241 "ffdhe6144", 01:15:09.241 "ffdhe8192" 01:15:09.241 ], 01:15:09.241 "dhchap_digests": [ 01:15:09.241 "sha256", 01:15:09.241 "sha384", 01:15:09.241 "sha512" 01:15:09.241 ], 01:15:09.241 "disable_auto_failback": false, 01:15:09.241 "fast_io_fail_timeout_sec": 0, 01:15:09.241 "generate_uuids": false, 01:15:09.241 "high_priority_weight": 0, 01:15:09.241 "io_path_stat": false, 01:15:09.241 
"io_queue_requests": 0, 01:15:09.241 "keep_alive_timeout_ms": 10000, 01:15:09.241 "low_priority_weight": 0, 01:15:09.241 "medium_priority_weight": 0, 01:15:09.241 "nvme_adminq_poll_period_us": 10000, 01:15:09.241 "nvme_error_stat": false, 01:15:09.241 "nvme_ioq_poll_period_us": 0, 01:15:09.241 "rdma_cm_event_timeout_ms": 0, 01:15:09.241 "rdma_max_cq_size": 0, 01:15:09.241 "rdma_srq_size": 0, 01:15:09.241 "reconnect_delay_sec": 0, 01:15:09.241 "timeout_admin_us": 0, 01:15:09.241 "timeout_us": 0, 01:15:09.241 "transport_ack_timeout": 0, 01:15:09.241 "transport_retry_count": 4, 01:15:09.241 "transport_tos": 0 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "bdev_nvme_set_hotplug", 01:15:09.241 "params": { 01:15:09.241 "enable": false, 01:15:09.241 "period_us": 100000 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "bdev_malloc_create", 01:15:09.241 "params": { 01:15:09.241 "block_size": 4096, 01:15:09.241 "name": "malloc0", 01:15:09.241 "num_blocks": 8192, 01:15:09.241 "optimal_io_boundary": 0, 01:15:09.241 "physical_block_size": 4096, 01:15:09.241 "uuid": "56db2dc1-f406-4e57-bd88-3cd4450689d7" 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "bdev_wait_for_examine" 01:15:09.241 } 01:15:09.241 ] 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "subsystem": "nbd", 01:15:09.241 "config": [] 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "subsystem": "scheduler", 01:15:09.241 "config": [ 01:15:09.241 { 01:15:09.241 "method": "framework_set_scheduler", 01:15:09.241 "params": { 01:15:09.241 "name": "static" 01:15:09.241 } 01:15:09.241 } 01:15:09.241 ] 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "subsystem": "nvmf", 01:15:09.241 "config": [ 01:15:09.241 { 01:15:09.241 "method": "nvmf_set_config", 01:15:09.241 "params": { 01:15:09.241 "admin_cmd_passthru": { 01:15:09.241 "identify_ctrlr": false 01:15:09.241 }, 01:15:09.241 "discovery_filter": "match_any" 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "nvmf_set_max_subsystems", 01:15:09.241 "params": { 01:15:09.241 "max_subsystems": 1024 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "nvmf_set_crdt", 01:15:09.241 "params": { 01:15:09.241 "crdt1": 0, 01:15:09.241 "crdt2": 0, 01:15:09.241 "crdt3": 0 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "nvmf_create_transport", 01:15:09.241 "params": { 01:15:09.241 "abort_timeout_sec": 1, 01:15:09.241 "ack_timeout": 0, 01:15:09.241 "buf_cache_size": 4294967295, 01:15:09.241 "c2h_success": false, 01:15:09.241 "data_wr_pool_size": 0, 01:15:09.241 "dif_insert_or_strip": false, 01:15:09.241 "in_capsule_data_size": 4096, 01:15:09.241 "io_unit_size": 131072, 01:15:09.241 "max_aq_depth": 128, 01:15:09.241 "max_io_qpairs_per_ctrlr": 127, 01:15:09.241 "max_io_size": 131072, 01:15:09.241 "max_queue_depth": 128, 01:15:09.241 "num_shared_buffers": 511, 01:15:09.241 "sock_priority": 0, 01:15:09.241 "trtype": "TCP", 01:15:09.241 "zcopy": false 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "nvmf_create_subsystem", 01:15:09.241 "params": { 01:15:09.241 "allow_any_host": false, 01:15:09.241 "ana_reporting": false, 01:15:09.241 "max_cntlid": 65519, 01:15:09.241 "max_namespaces": 10, 01:15:09.241 "min_cntlid": 1, 01:15:09.241 "model_number": "SPDK bdev Controller", 01:15:09.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:09.241 "serial_number": "SPDK00000000000001" 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "nvmf_subsystem_add_host", 01:15:09.241 "params": { 
01:15:09.241 "host": "nqn.2016-06.io.spdk:host1", 01:15:09.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:09.241 "psk": "/tmp/tmp.NMZuEDqCJg" 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "nvmf_subsystem_add_ns", 01:15:09.241 "params": { 01:15:09.241 "namespace": { 01:15:09.241 "bdev_name": "malloc0", 01:15:09.241 "nguid": "56DB2DC1F4064E57BD883CD4450689D7", 01:15:09.241 "no_auto_visible": false, 01:15:09.241 "nsid": 1, 01:15:09.241 "uuid": "56db2dc1-f406-4e57-bd88-3cd4450689d7" 01:15:09.241 }, 01:15:09.241 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:15:09.241 } 01:15:09.241 }, 01:15:09.241 { 01:15:09.241 "method": "nvmf_subsystem_add_listener", 01:15:09.241 "params": { 01:15:09.241 "listen_address": { 01:15:09.241 "adrfam": "IPv4", 01:15:09.241 "traddr": "10.0.0.2", 01:15:09.241 "trsvcid": "4420", 01:15:09.241 "trtype": "TCP" 01:15:09.241 }, 01:15:09.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:09.241 "secure_channel": true 01:15:09.241 } 01:15:09.241 } 01:15:09.241 ] 01:15:09.241 } 01:15:09.241 ] 01:15:09.241 }' 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100835 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100835 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100835 ']' 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:09.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:09.241 11:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:09.241 [2024-07-22 11:12:14.375436] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:09.241 [2024-07-22 11:12:14.375533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:09.521 [2024-07-22 11:12:14.515589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:09.521 [2024-07-22 11:12:14.575447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:15:09.521 [2024-07-22 11:12:14.575504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:09.521 [2024-07-22 11:12:14.575514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:09.521 [2024-07-22 11:12:14.575521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
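The JSON echoed above is the target configuration captured with save_config at target/tls.sh@196 and piped back through /dev/fd/62, so the replacement target (pid 100835) starts with the TLS listener (secure_channel), subsystem, malloc0 namespace, and PSK host entry already in place. Outside this harness the same round trip would look roughly like the following sketch (the file name is only illustrative):

  scripts/rpc.py save_config > tls_tgt.json
  build/bin/nvmf_tgt -m 0x2 -c tls_tgt.json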
01:15:09.521 [2024-07-22 11:12:14.575527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:15:09.521 [2024-07-22 11:12:14.575608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:09.779 [2024-07-22 11:12:14.793963] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:09.779 [2024-07-22 11:12:14.809910] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:15:09.779 [2024-07-22 11:12:14.825985] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:09.779 [2024-07-22 11:12:14.826221] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=100879 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 100879 /var/tmp/bdevperf.sock 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100879 ']' 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:10.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:10.345 11:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 01:15:10.345 "subsystems": [ 01:15:10.345 { 01:15:10.345 "subsystem": "keyring", 01:15:10.345 "config": [] 01:15:10.345 }, 01:15:10.345 { 01:15:10.345 "subsystem": "iobuf", 01:15:10.345 "config": [ 01:15:10.345 { 01:15:10.345 "method": "iobuf_set_options", 01:15:10.345 "params": { 01:15:10.345 "large_bufsize": 135168, 01:15:10.345 "large_pool_count": 1024, 01:15:10.345 "small_bufsize": 8192, 01:15:10.345 "small_pool_count": 8192 01:15:10.345 } 01:15:10.345 } 01:15:10.345 ] 01:15:10.345 }, 01:15:10.345 { 01:15:10.345 "subsystem": "sock", 01:15:10.345 "config": [ 01:15:10.345 { 01:15:10.345 "method": "sock_set_default_impl", 01:15:10.345 "params": { 01:15:10.345 "impl_name": "posix" 01:15:10.345 } 01:15:10.345 }, 01:15:10.345 { 01:15:10.345 "method": "sock_impl_set_options", 01:15:10.345 "params": { 01:15:10.345 "enable_ktls": false, 01:15:10.345 "enable_placement_id": 0, 01:15:10.345 "enable_quickack": false, 01:15:10.345 "enable_recv_pipe": true, 01:15:10.345 "enable_zerocopy_send_client": false, 01:15:10.345 "enable_zerocopy_send_server": true, 01:15:10.345 "impl_name": "ssl", 01:15:10.345 "recv_buf_size": 4096, 01:15:10.345 "send_buf_size": 4096, 01:15:10.345 "tls_version": 0, 01:15:10.345 "zerocopy_threshold": 0 01:15:10.345 } 01:15:10.345 }, 01:15:10.345 { 01:15:10.345 "method": "sock_impl_set_options", 01:15:10.345 "params": { 01:15:10.345 "enable_ktls": false, 01:15:10.345 "enable_placement_id": 0, 01:15:10.345 "enable_quickack": false, 01:15:10.345 "enable_recv_pipe": true, 01:15:10.345 "enable_zerocopy_send_client": false, 01:15:10.345 "enable_zerocopy_send_server": true, 01:15:10.345 "impl_name": "posix", 01:15:10.345 "recv_buf_size": 2097152, 01:15:10.345 "send_buf_size": 2097152, 01:15:10.345 "tls_version": 0, 01:15:10.345 "zerocopy_threshold": 0 01:15:10.345 } 01:15:10.345 } 01:15:10.345 ] 01:15:10.345 }, 01:15:10.345 { 01:15:10.345 "subsystem": "vmd", 01:15:10.345 "config": [] 01:15:10.345 }, 01:15:10.345 { 01:15:10.345 "subsystem": "accel", 01:15:10.345 "config": [ 01:15:10.345 { 01:15:10.345 "method": "accel_set_options", 01:15:10.345 "params": { 01:15:10.345 "buf_count": 2048, 01:15:10.345 "large_cache_size": 16, 01:15:10.345 "sequence_count": 2048, 01:15:10.345 "small_cache_size": 128, 01:15:10.345 "task_count": 2048 01:15:10.345 } 01:15:10.345 } 01:15:10.345 ] 01:15:10.345 }, 01:15:10.345 { 01:15:10.345 "subsystem": "bdev", 01:15:10.345 "config": [ 01:15:10.345 { 01:15:10.345 "method": "bdev_set_options", 01:15:10.345 "params": { 01:15:10.345 "bdev_auto_examine": true, 01:15:10.345 "bdev_io_cache_size": 256, 01:15:10.345 "bdev_io_pool_size": 65535, 01:15:10.346 "iobuf_large_cache_size": 16, 01:15:10.346 "iobuf_small_cache_size": 128 01:15:10.346 } 01:15:10.346 }, 01:15:10.346 { 01:15:10.346 "method": "bdev_raid_set_options", 01:15:10.346 "params": { 01:15:10.346 "process_max_bandwidth_mb_sec": 0, 01:15:10.346 "process_window_size_kb": 1024 01:15:10.346 } 01:15:10.346 }, 01:15:10.346 { 01:15:10.346 "method": "bdev_iscsi_set_options", 01:15:10.346 "params": { 01:15:10.346 "timeout_sec": 30 01:15:10.346 } 01:15:10.346 }, 01:15:10.346 { 01:15:10.346 "method": "bdev_nvme_set_options", 01:15:10.346 "params": { 01:15:10.346 "action_on_timeout": "none", 01:15:10.346 "allow_accel_sequence": false, 01:15:10.346 "arbitration_burst": 0, 
01:15:10.346 "bdev_retry_count": 3, 01:15:10.346 "ctrlr_loss_timeout_sec": 0, 01:15:10.346 "delay_cmd_submit": true, 01:15:10.346 "dhchap_dhgroups": [ 01:15:10.346 "null", 01:15:10.346 "ffdhe2048", 01:15:10.346 "ffdhe3072", 01:15:10.346 "ffdhe4096", 01:15:10.346 "ffdhe6144", 01:15:10.346 "ffdhe8192" 01:15:10.346 ], 01:15:10.346 "dhchap_digests": [ 01:15:10.346 "sha256", 01:15:10.346 "sha384", 01:15:10.346 "sha512" 01:15:10.346 ], 01:15:10.346 "disable_auto_failback": false, 01:15:10.346 "fast_io_fail_timeout_sec": 0, 01:15:10.346 "generate_uuids": false, 01:15:10.346 "high_priority_weight": 0, 01:15:10.346 "io_path_stat": false, 01:15:10.346 "io_queue_requests": 512, 01:15:10.346 "keep_alive_timeout_ms": 10000, 01:15:10.346 "low_priority_weight": 0, 01:15:10.346 "medium_priority_weight": 0, 01:15:10.346 "nvme_adminq_poll_period_us": 10000, 01:15:10.346 "nvme_error_stat": false, 01:15:10.346 "nvme_ioq_poll_period_us": 0, 01:15:10.346 "rdma_cm_event_timeout_ms": 0, 01:15:10.346 "rdma_max_cq_size": 0, 01:15:10.346 "rdma_srq_size": 0, 01:15:10.346 "reconnect_delay_sec": 0, 01:15:10.346 "timeout_admin_us": 0, 01:15:10.346 "timeout_us": 0, 01:15:10.346 "transport_ack_timeout": 0, 01:15:10.346 "transport_retry_count": 4, 01:15:10.346 "transport_tos": 0 01:15:10.346 } 01:15:10.346 }, 01:15:10.346 { 01:15:10.346 "method": "bdev_nvme_attach_controller", 01:15:10.346 "params": { 01:15:10.346 "adrfam": "IPv4", 01:15:10.346 "ctrlr_loss_timeout_sec": 0, 01:15:10.346 "ddgst": false, 01:15:10.346 "fast_io_fail_timeout_sec": 0, 01:15:10.346 "hdgst": false, 01:15:10.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:15:10.346 "name": "TLSTEST", 01:15:10.346 "prchk_guard": false, 01:15:10.346 "prchk_reftag": false, 01:15:10.346 "psk": "/tmp/tmp.NMZuEDqCJg", 01:15:10.346 "reconnect_delay_sec": 0, 01:15:10.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:15:10.346 "traddr": "10.0.0.2", 01:15:10.346 "trsvcid": "4420", 01:15:10.346 "trtype": "TCP" 01:15:10.346 } 01:15:10.346 }, 01:15:10.346 { 01:15:10.346 "method": "bdev_nvme_set_hotplug", 01:15:10.346 "params": { 01:15:10.346 "enable": false, 01:15:10.346 "period_us": 100000 01:15:10.346 } 01:15:10.346 }, 01:15:10.346 { 01:15:10.346 "method": "bdev_wait_for_examine" 01:15:10.346 } 01:15:10.346 ] 01:15:10.346 }, 01:15:10.346 { 01:15:10.346 "subsystem": "nbd", 01:15:10.346 "config": [] 01:15:10.346 } 01:15:10.346 ] 01:15:10.346 }' 01:15:10.346 11:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 01:15:10.346 [2024-07-22 11:12:15.339014] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:15:10.346 [2024-07-22 11:12:15.339096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100879 ] 01:15:10.346 [2024-07-22 11:12:15.469366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:10.346 [2024-07-22 11:12:15.539319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:15:10.604 [2024-07-22 11:12:15.723169] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:10.604 [2024-07-22 11:12:15.723329] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:15:11.171 11:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:11.171 11:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:11.171 11:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:15:11.171 Running I/O for 10 seconds... 01:15:23.365 01:15:23.365 Latency(us) 01:15:23.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:23.365 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:15:23.365 Verification LBA range: start 0x0 length 0x2000 01:15:23.365 TLSTESTn1 : 10.02 4531.37 17.70 0.00 0.00 28196.44 6225.92 22639.71 01:15:23.365 =================================================================================================================== 01:15:23.365 Total : 4531.37 17.70 0.00 0.00 28196.44 6225.92 22639.71 01:15:23.365 0 01:15:23.365 11:12:26 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:15:23.365 11:12:26 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 100879 01:15:23.365 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100879 ']' 01:15:23.365 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100879 01:15:23.365 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:23.365 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:23.365 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100879 01:15:23.365 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:15:23.366 killing process with pid 100879 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100879' 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100879 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100879 01:15:23.366 Received shutdown signal, test time was about 10.000000 seconds 01:15:23.366 01:15:23.366 Latency(us) 01:15:23.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:23.366 =================================================================================================================== 01:15:23.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:23.366 [2024-07-22 11:12:26.397200] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 100835 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100835 ']' 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100835 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100835 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:23.366 killing process with pid 100835 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100835' 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100835 01:15:23.366 [2024-07-22 11:12:26.687181] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100835 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101024 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101024 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101024 ']' 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:23.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:23.366 11:12:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:23.366 [2024-07-22 11:12:26.941915] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:23.366 [2024-07-22 11:12:26.942028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:23.366 [2024-07-22 11:12:27.084834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:23.366 [2024-07-22 11:12:27.168564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
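The 10-second verify pass that just completed drove bdevperf entirely from a JSON configuration handed over a file descriptor: target/tls.sh@204 echoes the config reproduced above and starts bdevperf with -c /dev/fd/63, and because the bdev_nvme_attach_controller entry still carries the raw PSK file path ("psk": "/tmp/tmp.NMZuEDqCJg"), the spdk_nvme_ctrlr_opts.psk deprecation warning fires at shutdown. A minimal sketch of that invocation pattern follows; bdevperf_cfg.json is an illustrative stand-in for the /dev/fd/63 stream the script builds inline, and the flags are copied from the trace.

  # Feed a pre-built JSON config to bdevperf instead of issuing RPCs one by one
  # (bdevperf_cfg.json is a stand-in; the test passes the same JSON via /dev/fd/63)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
      -c bdevperf_cfg.json
  # Once the app is up, the companion script kicks off the actual I/O
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests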
01:15:23.366 [2024-07-22 11:12:27.168651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:23.366 [2024-07-22 11:12:27.168676] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:23.366 [2024-07-22 11:12:27.168687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:23.366 [2024-07-22 11:12:27.168697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:15:23.366 [2024-07-22 11:12:27.168731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.NMZuEDqCJg 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NMZuEDqCJg 01:15:23.366 11:12:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:15:23.366 [2024-07-22 11:12:28.109587] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:23.366 11:12:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:15:23.366 11:12:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 01:15:23.366 [2024-07-22 11:12:28.553667] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:23.366 [2024-07-22 11:12:28.553953] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:23.366 11:12:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:15:23.625 malloc0 01:15:23.625 11:12:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:15:23.883 11:12:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMZuEDqCJg 01:15:24.142 [2024-07-22 11:12:29.131473] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=101121 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 101121 /var/tmp/bdevperf.sock 01:15:24.142 
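Before that second bdevperf instance is attached, setup_nvmf_tgt (target/tls.sh@49-58, traced just above) prepares the freshly started target for TLS: a TCP transport, a subsystem backed by a malloc namespace, a listener opened with -k (which is what produces the "TLS support is considered experimental" notice), and a host entry that points at the PSK file. Collected in one place, with the rpc.py path shortened to a variable for readability, the sequence from the trace is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o          # TCP transport, options as issued by setup_nvmf_tgt
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420 -k            # -k requests a TLS-capable listener
  $rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MB malloc bdev, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
       --psk /tmp/tmp.NMZuEDqCJg                # PSK given by path; deprecated, hence the warning above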
11:12:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101121 ']' 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:15:24.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:24.142 11:12:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:24.142 [2024-07-22 11:12:29.184488] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:24.142 [2024-07-22 11:12:29.184548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101121 ] 01:15:24.142 [2024-07-22 11:12:29.311688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:24.401 [2024-07-22 11:12:29.377886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:24.967 11:12:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:24.967 11:12:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:24.967 11:12:30 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NMZuEDqCJg 01:15:25.226 11:12:30 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:15:25.484 [2024-07-22 11:12:30.557254] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:25.484 nvme0n1 01:15:25.484 11:12:30 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:15:25.742 Running I/O for 1 seconds... 
01:15:26.681 01:15:26.681 Latency(us) 01:15:26.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:26.681 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:15:26.681 Verification LBA range: start 0x0 length 0x2000 01:15:26.681 nvme0n1 : 1.01 4674.62 18.26 0.00 0.00 27116.98 279.27 17992.61 01:15:26.681 =================================================================================================================== 01:15:26.681 Total : 4674.62 18.26 0.00 0.00 27116.98 279.27 17992.61 01:15:26.681 0 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 101121 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101121 ']' 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101121 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101121 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:26.681 killing process with pid 101121 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101121' 01:15:26.681 Received shutdown signal, test time was about 1.000000 seconds 01:15:26.681 01:15:26.681 Latency(us) 01:15:26.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:26.681 =================================================================================================================== 01:15:26.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101121 01:15:26.681 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101121 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 101024 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101024 ']' 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101024 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101024 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:15:26.940 killing process with pid 101024 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101024' 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101024 01:15:26.940 [2024-07-22 11:12:31.995910] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:15:26.940 11:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101024 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:27.199 11:12:32 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101191 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101191 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101191 ']' 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:27.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:27.199 11:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:27.199 [2024-07-22 11:12:32.325736] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:27.199 [2024-07-22 11:12:32.325825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:27.458 [2024-07-22 11:12:32.456863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:27.458 [2024-07-22 11:12:32.527820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:15:27.458 [2024-07-22 11:12:32.527884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:27.458 [2024-07-22 11:12:32.527894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:27.458 [2024-07-22 11:12:32.527901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:27.458 [2024-07-22 11:12:32.527908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
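On the initiator side, the keyring-based flow at target/tls.sh@227-228 above (and repeated at @257-258 below) replaces that deprecated by-path form: the PSK file is first registered with the bdevperf application as a named key, and the controller attach then refers to the key by name. Pulled out of the trace, the two RPCs are:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register the PSK file as keyring entry "key0" inside the bdevperf app
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NMZuEDqCJg
  # Attach to the TLS listener, referencing the key by name rather than by path
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1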
01:15:27.458 [2024-07-22 11:12:32.527938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:15:28.023 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:28.023 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:28.023 11:12:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:28.023 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:28.023 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:28.281 [2024-07-22 11:12:33.259511] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:28.281 malloc0 01:15:28.281 [2024-07-22 11:12:33.293035] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:28.281 [2024-07-22 11:12:33.293241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=101241 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 101241 /var/tmp/bdevperf.sock 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101241 ']' 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:28.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:28.281 11:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:15:28.281 [2024-07-22 11:12:33.382874] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:15:28.281 [2024-07-22 11:12:33.382991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101241 ] 01:15:28.539 [2024-07-22 11:12:33.515909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:28.539 [2024-07-22 11:12:33.572646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:28.539 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:28.539 11:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:28.539 11:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NMZuEDqCJg 01:15:28.796 11:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:15:29.053 [2024-07-22 11:12:34.130043] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:29.053 nvme0n1 01:15:29.053 11:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:15:29.310 Running I/O for 1 seconds... 01:15:30.241 01:15:30.241 Latency(us) 01:15:30.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:30.241 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:15:30.241 Verification LBA range: start 0x0 length 0x2000 01:15:30.241 nvme0n1 : 1.01 4720.88 18.44 0.00 0.00 26868.77 5898.24 21448.15 01:15:30.241 =================================================================================================================== 01:15:30.241 Total : 4720.88 18.44 0.00 0.00 26868.77 5898.24 21448.15 01:15:30.241 0 01:15:30.241 11:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 01:15:30.241 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:30.241 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:30.499 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:30.499 11:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 01:15:30.499 "subsystems": [ 01:15:30.499 { 01:15:30.499 "subsystem": "keyring", 01:15:30.499 "config": [ 01:15:30.499 { 01:15:30.499 "method": "keyring_file_add_key", 01:15:30.499 "params": { 01:15:30.499 "name": "key0", 01:15:30.499 "path": "/tmp/tmp.NMZuEDqCJg" 01:15:30.499 } 01:15:30.499 } 01:15:30.499 ] 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "subsystem": "iobuf", 01:15:30.499 "config": [ 01:15:30.499 { 01:15:30.499 "method": "iobuf_set_options", 01:15:30.499 "params": { 01:15:30.499 "large_bufsize": 135168, 01:15:30.499 "large_pool_count": 1024, 01:15:30.499 "small_bufsize": 8192, 01:15:30.499 "small_pool_count": 8192 01:15:30.499 } 01:15:30.499 } 01:15:30.499 ] 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "subsystem": "sock", 01:15:30.499 "config": [ 01:15:30.499 { 01:15:30.499 "method": "sock_set_default_impl", 01:15:30.499 "params": { 01:15:30.499 "impl_name": "posix" 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "sock_impl_set_options", 01:15:30.499 "params": { 01:15:30.499 
"enable_ktls": false, 01:15:30.499 "enable_placement_id": 0, 01:15:30.499 "enable_quickack": false, 01:15:30.499 "enable_recv_pipe": true, 01:15:30.499 "enable_zerocopy_send_client": false, 01:15:30.499 "enable_zerocopy_send_server": true, 01:15:30.499 "impl_name": "ssl", 01:15:30.499 "recv_buf_size": 4096, 01:15:30.499 "send_buf_size": 4096, 01:15:30.499 "tls_version": 0, 01:15:30.499 "zerocopy_threshold": 0 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "sock_impl_set_options", 01:15:30.499 "params": { 01:15:30.499 "enable_ktls": false, 01:15:30.499 "enable_placement_id": 0, 01:15:30.499 "enable_quickack": false, 01:15:30.499 "enable_recv_pipe": true, 01:15:30.499 "enable_zerocopy_send_client": false, 01:15:30.499 "enable_zerocopy_send_server": true, 01:15:30.499 "impl_name": "posix", 01:15:30.499 "recv_buf_size": 2097152, 01:15:30.499 "send_buf_size": 2097152, 01:15:30.499 "tls_version": 0, 01:15:30.499 "zerocopy_threshold": 0 01:15:30.499 } 01:15:30.499 } 01:15:30.499 ] 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "subsystem": "vmd", 01:15:30.499 "config": [] 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "subsystem": "accel", 01:15:30.499 "config": [ 01:15:30.499 { 01:15:30.499 "method": "accel_set_options", 01:15:30.499 "params": { 01:15:30.499 "buf_count": 2048, 01:15:30.499 "large_cache_size": 16, 01:15:30.499 "sequence_count": 2048, 01:15:30.499 "small_cache_size": 128, 01:15:30.499 "task_count": 2048 01:15:30.499 } 01:15:30.499 } 01:15:30.499 ] 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "subsystem": "bdev", 01:15:30.499 "config": [ 01:15:30.499 { 01:15:30.499 "method": "bdev_set_options", 01:15:30.499 "params": { 01:15:30.499 "bdev_auto_examine": true, 01:15:30.499 "bdev_io_cache_size": 256, 01:15:30.499 "bdev_io_pool_size": 65535, 01:15:30.499 "iobuf_large_cache_size": 16, 01:15:30.499 "iobuf_small_cache_size": 128 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "bdev_raid_set_options", 01:15:30.499 "params": { 01:15:30.499 "process_max_bandwidth_mb_sec": 0, 01:15:30.499 "process_window_size_kb": 1024 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "bdev_iscsi_set_options", 01:15:30.499 "params": { 01:15:30.499 "timeout_sec": 30 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "bdev_nvme_set_options", 01:15:30.499 "params": { 01:15:30.499 "action_on_timeout": "none", 01:15:30.499 "allow_accel_sequence": false, 01:15:30.499 "arbitration_burst": 0, 01:15:30.499 "bdev_retry_count": 3, 01:15:30.499 "ctrlr_loss_timeout_sec": 0, 01:15:30.499 "delay_cmd_submit": true, 01:15:30.499 "dhchap_dhgroups": [ 01:15:30.499 "null", 01:15:30.499 "ffdhe2048", 01:15:30.499 "ffdhe3072", 01:15:30.499 "ffdhe4096", 01:15:30.499 "ffdhe6144", 01:15:30.499 "ffdhe8192" 01:15:30.499 ], 01:15:30.499 "dhchap_digests": [ 01:15:30.499 "sha256", 01:15:30.499 "sha384", 01:15:30.499 "sha512" 01:15:30.499 ], 01:15:30.499 "disable_auto_failback": false, 01:15:30.499 "fast_io_fail_timeout_sec": 0, 01:15:30.499 "generate_uuids": false, 01:15:30.499 "high_priority_weight": 0, 01:15:30.499 "io_path_stat": false, 01:15:30.499 "io_queue_requests": 0, 01:15:30.499 "keep_alive_timeout_ms": 10000, 01:15:30.499 "low_priority_weight": 0, 01:15:30.499 "medium_priority_weight": 0, 01:15:30.499 "nvme_adminq_poll_period_us": 10000, 01:15:30.499 "nvme_error_stat": false, 01:15:30.499 "nvme_ioq_poll_period_us": 0, 01:15:30.499 "rdma_cm_event_timeout_ms": 0, 01:15:30.499 "rdma_max_cq_size": 0, 01:15:30.499 "rdma_srq_size": 0, 01:15:30.499 
"reconnect_delay_sec": 0, 01:15:30.499 "timeout_admin_us": 0, 01:15:30.499 "timeout_us": 0, 01:15:30.499 "transport_ack_timeout": 0, 01:15:30.499 "transport_retry_count": 4, 01:15:30.499 "transport_tos": 0 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "bdev_nvme_set_hotplug", 01:15:30.499 "params": { 01:15:30.499 "enable": false, 01:15:30.499 "period_us": 100000 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "bdev_malloc_create", 01:15:30.499 "params": { 01:15:30.499 "block_size": 4096, 01:15:30.499 "name": "malloc0", 01:15:30.499 "num_blocks": 8192, 01:15:30.499 "optimal_io_boundary": 0, 01:15:30.499 "physical_block_size": 4096, 01:15:30.499 "uuid": "2a05cb3a-3cc7-4d99-933d-8d7c028bb30b" 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "bdev_wait_for_examine" 01:15:30.499 } 01:15:30.499 ] 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "subsystem": "nbd", 01:15:30.499 "config": [] 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "subsystem": "scheduler", 01:15:30.499 "config": [ 01:15:30.499 { 01:15:30.499 "method": "framework_set_scheduler", 01:15:30.499 "params": { 01:15:30.499 "name": "static" 01:15:30.499 } 01:15:30.499 } 01:15:30.499 ] 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "subsystem": "nvmf", 01:15:30.499 "config": [ 01:15:30.499 { 01:15:30.499 "method": "nvmf_set_config", 01:15:30.499 "params": { 01:15:30.499 "admin_cmd_passthru": { 01:15:30.499 "identify_ctrlr": false 01:15:30.499 }, 01:15:30.499 "discovery_filter": "match_any" 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "nvmf_set_max_subsystems", 01:15:30.499 "params": { 01:15:30.499 "max_subsystems": 1024 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "nvmf_set_crdt", 01:15:30.499 "params": { 01:15:30.499 "crdt1": 0, 01:15:30.499 "crdt2": 0, 01:15:30.499 "crdt3": 0 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "nvmf_create_transport", 01:15:30.499 "params": { 01:15:30.499 "abort_timeout_sec": 1, 01:15:30.499 "ack_timeout": 0, 01:15:30.499 "buf_cache_size": 4294967295, 01:15:30.499 "c2h_success": false, 01:15:30.499 "data_wr_pool_size": 0, 01:15:30.499 "dif_insert_or_strip": false, 01:15:30.499 "in_capsule_data_size": 4096, 01:15:30.499 "io_unit_size": 131072, 01:15:30.499 "max_aq_depth": 128, 01:15:30.499 "max_io_qpairs_per_ctrlr": 127, 01:15:30.499 "max_io_size": 131072, 01:15:30.499 "max_queue_depth": 128, 01:15:30.499 "num_shared_buffers": 511, 01:15:30.499 "sock_priority": 0, 01:15:30.499 "trtype": "TCP", 01:15:30.499 "zcopy": false 01:15:30.499 } 01:15:30.499 }, 01:15:30.499 { 01:15:30.499 "method": "nvmf_create_subsystem", 01:15:30.499 "params": { 01:15:30.500 "allow_any_host": false, 01:15:30.500 "ana_reporting": false, 01:15:30.500 "max_cntlid": 65519, 01:15:30.500 "max_namespaces": 32, 01:15:30.500 "min_cntlid": 1, 01:15:30.500 "model_number": "SPDK bdev Controller", 01:15:30.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:30.500 "serial_number": "00000000000000000000" 01:15:30.500 } 01:15:30.500 }, 01:15:30.500 { 01:15:30.500 "method": "nvmf_subsystem_add_host", 01:15:30.500 "params": { 01:15:30.500 "host": "nqn.2016-06.io.spdk:host1", 01:15:30.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:30.500 "psk": "key0" 01:15:30.500 } 01:15:30.500 }, 01:15:30.500 { 01:15:30.500 "method": "nvmf_subsystem_add_ns", 01:15:30.500 "params": { 01:15:30.500 "namespace": { 01:15:30.500 "bdev_name": "malloc0", 01:15:30.500 "nguid": "2A05CB3A3CC74D99933D8D7C028BB30B", 01:15:30.500 "no_auto_visible": false, 
01:15:30.500 "nsid": 1, 01:15:30.500 "uuid": "2a05cb3a-3cc7-4d99-933d-8d7c028bb30b" 01:15:30.500 }, 01:15:30.500 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:15:30.500 } 01:15:30.500 }, 01:15:30.500 { 01:15:30.500 "method": "nvmf_subsystem_add_listener", 01:15:30.500 "params": { 01:15:30.500 "listen_address": { 01:15:30.500 "adrfam": "IPv4", 01:15:30.500 "traddr": "10.0.0.2", 01:15:30.500 "trsvcid": "4420", 01:15:30.500 "trtype": "TCP" 01:15:30.500 }, 01:15:30.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:30.500 "secure_channel": false, 01:15:30.500 "sock_impl": "ssl" 01:15:30.500 } 01:15:30.500 } 01:15:30.500 ] 01:15:30.500 } 01:15:30.500 ] 01:15:30.500 }' 01:15:30.500 11:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:15:30.758 11:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 01:15:30.758 "subsystems": [ 01:15:30.758 { 01:15:30.758 "subsystem": "keyring", 01:15:30.758 "config": [ 01:15:30.758 { 01:15:30.758 "method": "keyring_file_add_key", 01:15:30.758 "params": { 01:15:30.758 "name": "key0", 01:15:30.758 "path": "/tmp/tmp.NMZuEDqCJg" 01:15:30.758 } 01:15:30.758 } 01:15:30.758 ] 01:15:30.758 }, 01:15:30.758 { 01:15:30.758 "subsystem": "iobuf", 01:15:30.758 "config": [ 01:15:30.758 { 01:15:30.758 "method": "iobuf_set_options", 01:15:30.758 "params": { 01:15:30.758 "large_bufsize": 135168, 01:15:30.758 "large_pool_count": 1024, 01:15:30.758 "small_bufsize": 8192, 01:15:30.758 "small_pool_count": 8192 01:15:30.758 } 01:15:30.758 } 01:15:30.758 ] 01:15:30.758 }, 01:15:30.758 { 01:15:30.758 "subsystem": "sock", 01:15:30.758 "config": [ 01:15:30.758 { 01:15:30.758 "method": "sock_set_default_impl", 01:15:30.758 "params": { 01:15:30.758 "impl_name": "posix" 01:15:30.758 } 01:15:30.758 }, 01:15:30.758 { 01:15:30.758 "method": "sock_impl_set_options", 01:15:30.758 "params": { 01:15:30.758 "enable_ktls": false, 01:15:30.758 "enable_placement_id": 0, 01:15:30.758 "enable_quickack": false, 01:15:30.758 "enable_recv_pipe": true, 01:15:30.758 "enable_zerocopy_send_client": false, 01:15:30.758 "enable_zerocopy_send_server": true, 01:15:30.758 "impl_name": "ssl", 01:15:30.758 "recv_buf_size": 4096, 01:15:30.758 "send_buf_size": 4096, 01:15:30.758 "tls_version": 0, 01:15:30.758 "zerocopy_threshold": 0 01:15:30.758 } 01:15:30.758 }, 01:15:30.758 { 01:15:30.758 "method": "sock_impl_set_options", 01:15:30.758 "params": { 01:15:30.758 "enable_ktls": false, 01:15:30.758 "enable_placement_id": 0, 01:15:30.758 "enable_quickack": false, 01:15:30.758 "enable_recv_pipe": true, 01:15:30.758 "enable_zerocopy_send_client": false, 01:15:30.758 "enable_zerocopy_send_server": true, 01:15:30.758 "impl_name": "posix", 01:15:30.758 "recv_buf_size": 2097152, 01:15:30.758 "send_buf_size": 2097152, 01:15:30.758 "tls_version": 0, 01:15:30.758 "zerocopy_threshold": 0 01:15:30.758 } 01:15:30.758 } 01:15:30.758 ] 01:15:30.758 }, 01:15:30.758 { 01:15:30.758 "subsystem": "vmd", 01:15:30.758 "config": [] 01:15:30.758 }, 01:15:30.758 { 01:15:30.758 "subsystem": "accel", 01:15:30.758 "config": [ 01:15:30.758 { 01:15:30.758 "method": "accel_set_options", 01:15:30.758 "params": { 01:15:30.758 "buf_count": 2048, 01:15:30.758 "large_cache_size": 16, 01:15:30.758 "sequence_count": 2048, 01:15:30.758 "small_cache_size": 128, 01:15:30.758 "task_count": 2048 01:15:30.758 } 01:15:30.758 } 01:15:30.758 ] 01:15:30.758 }, 01:15:30.758 { 01:15:30.758 "subsystem": "bdev", 01:15:30.758 "config": [ 01:15:30.758 { 01:15:30.758 "method": 
"bdev_set_options", 01:15:30.758 "params": { 01:15:30.758 "bdev_auto_examine": true, 01:15:30.758 "bdev_io_cache_size": 256, 01:15:30.758 "bdev_io_pool_size": 65535, 01:15:30.758 "iobuf_large_cache_size": 16, 01:15:30.758 "iobuf_small_cache_size": 128 01:15:30.759 } 01:15:30.759 }, 01:15:30.759 { 01:15:30.759 "method": "bdev_raid_set_options", 01:15:30.759 "params": { 01:15:30.759 "process_max_bandwidth_mb_sec": 0, 01:15:30.759 "process_window_size_kb": 1024 01:15:30.759 } 01:15:30.759 }, 01:15:30.759 { 01:15:30.759 "method": "bdev_iscsi_set_options", 01:15:30.759 "params": { 01:15:30.759 "timeout_sec": 30 01:15:30.759 } 01:15:30.759 }, 01:15:30.759 { 01:15:30.759 "method": "bdev_nvme_set_options", 01:15:30.759 "params": { 01:15:30.759 "action_on_timeout": "none", 01:15:30.759 "allow_accel_sequence": false, 01:15:30.759 "arbitration_burst": 0, 01:15:30.759 "bdev_retry_count": 3, 01:15:30.759 "ctrlr_loss_timeout_sec": 0, 01:15:30.759 "delay_cmd_submit": true, 01:15:30.759 "dhchap_dhgroups": [ 01:15:30.759 "null", 01:15:30.759 "ffdhe2048", 01:15:30.759 "ffdhe3072", 01:15:30.759 "ffdhe4096", 01:15:30.759 "ffdhe6144", 01:15:30.759 "ffdhe8192" 01:15:30.759 ], 01:15:30.759 "dhchap_digests": [ 01:15:30.759 "sha256", 01:15:30.759 "sha384", 01:15:30.759 "sha512" 01:15:30.759 ], 01:15:30.759 "disable_auto_failback": false, 01:15:30.759 "fast_io_fail_timeout_sec": 0, 01:15:30.759 "generate_uuids": false, 01:15:30.759 "high_priority_weight": 0, 01:15:30.759 "io_path_stat": false, 01:15:30.759 "io_queue_requests": 512, 01:15:30.759 "keep_alive_timeout_ms": 10000, 01:15:30.759 "low_priority_weight": 0, 01:15:30.759 "medium_priority_weight": 0, 01:15:30.759 "nvme_adminq_poll_period_us": 10000, 01:15:30.759 "nvme_error_stat": false, 01:15:30.759 "nvme_ioq_poll_period_us": 0, 01:15:30.759 "rdma_cm_event_timeout_ms": 0, 01:15:30.759 "rdma_max_cq_size": 0, 01:15:30.759 "rdma_srq_size": 0, 01:15:30.759 "reconnect_delay_sec": 0, 01:15:30.759 "timeout_admin_us": 0, 01:15:30.759 "timeout_us": 0, 01:15:30.759 "transport_ack_timeout": 0, 01:15:30.759 "transport_retry_count": 4, 01:15:30.759 "transport_tos": 0 01:15:30.759 } 01:15:30.759 }, 01:15:30.759 { 01:15:30.759 "method": "bdev_nvme_attach_controller", 01:15:30.759 "params": { 01:15:30.759 "adrfam": "IPv4", 01:15:30.759 "ctrlr_loss_timeout_sec": 0, 01:15:30.759 "ddgst": false, 01:15:30.759 "fast_io_fail_timeout_sec": 0, 01:15:30.759 "hdgst": false, 01:15:30.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:15:30.759 "name": "nvme0", 01:15:30.759 "prchk_guard": false, 01:15:30.759 "prchk_reftag": false, 01:15:30.759 "psk": "key0", 01:15:30.759 "reconnect_delay_sec": 0, 01:15:30.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:15:30.759 "traddr": "10.0.0.2", 01:15:30.759 "trsvcid": "4420", 01:15:30.759 "trtype": "TCP" 01:15:30.759 } 01:15:30.759 }, 01:15:30.759 { 01:15:30.759 "method": "bdev_nvme_set_hotplug", 01:15:30.759 "params": { 01:15:30.759 "enable": false, 01:15:30.759 "period_us": 100000 01:15:30.759 } 01:15:30.759 }, 01:15:30.759 { 01:15:30.759 "method": "bdev_enable_histogram", 01:15:30.759 "params": { 01:15:30.759 "enable": true, 01:15:30.759 "name": "nvme0n1" 01:15:30.759 } 01:15:30.759 }, 01:15:30.759 { 01:15:30.759 "method": "bdev_wait_for_examine" 01:15:30.759 } 01:15:30.759 ] 01:15:30.759 }, 01:15:30.759 { 01:15:30.759 "subsystem": "nbd", 01:15:30.759 "config": [] 01:15:30.759 } 01:15:30.759 ] 01:15:30.759 }' 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 101241 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 101241 ']' 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101241 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101241 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:30.759 killing process with pid 101241 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101241' 01:15:30.759 Received shutdown signal, test time was about 1.000000 seconds 01:15:30.759 01:15:30.759 Latency(us) 01:15:30.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:30.759 =================================================================================================================== 01:15:30.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101241 01:15:30.759 11:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101241 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 101191 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101191 ']' 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101191 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101191 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:15:31.017 killing process with pid 101191 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101191' 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101191 01:15:31.017 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101191 01:15:31.275 11:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 01:15:31.275 11:12:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:31.275 11:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 01:15:31.275 "subsystems": [ 01:15:31.275 { 01:15:31.275 "subsystem": "keyring", 01:15:31.275 "config": [ 01:15:31.275 { 01:15:31.275 "method": "keyring_file_add_key", 01:15:31.275 "params": { 01:15:31.275 "name": "key0", 01:15:31.275 "path": "/tmp/tmp.NMZuEDqCJg" 01:15:31.275 } 01:15:31.275 } 01:15:31.275 ] 01:15:31.275 }, 01:15:31.275 { 01:15:31.275 "subsystem": "iobuf", 01:15:31.275 "config": [ 01:15:31.275 { 01:15:31.275 "method": "iobuf_set_options", 01:15:31.275 "params": { 01:15:31.275 "large_bufsize": 135168, 01:15:31.275 "large_pool_count": 1024, 01:15:31.275 "small_bufsize": 8192, 01:15:31.275 "small_pool_count": 8192 01:15:31.275 } 01:15:31.275 } 01:15:31.275 ] 01:15:31.275 }, 01:15:31.275 { 01:15:31.275 "subsystem": "sock", 01:15:31.275 "config": [ 01:15:31.275 { 01:15:31.275 "method": 
"sock_set_default_impl", 01:15:31.275 "params": { 01:15:31.275 "impl_name": "posix" 01:15:31.275 } 01:15:31.275 }, 01:15:31.275 { 01:15:31.275 "method": "sock_impl_set_options", 01:15:31.275 "params": { 01:15:31.275 "enable_ktls": false, 01:15:31.275 "enable_placement_id": 0, 01:15:31.275 "enable_quickack": false, 01:15:31.275 "enable_recv_pipe": true, 01:15:31.275 "enable_zerocopy_send_client": false, 01:15:31.275 "enable_zerocopy_send_server": true, 01:15:31.275 "impl_name": "ssl", 01:15:31.275 "recv_buf_size": 4096, 01:15:31.275 "send_buf_size": 4096, 01:15:31.275 "tls_version": 0, 01:15:31.275 "zerocopy_threshold": 0 01:15:31.275 } 01:15:31.275 }, 01:15:31.275 { 01:15:31.275 "method": "sock_impl_set_options", 01:15:31.275 "params": { 01:15:31.275 "enable_ktls": false, 01:15:31.275 "enable_placement_id": 0, 01:15:31.275 "enable_quickack": false, 01:15:31.275 "enable_recv_pipe": true, 01:15:31.275 "enable_zerocopy_send_client": false, 01:15:31.275 "enable_zerocopy_send_server": true, 01:15:31.275 "impl_name": "posix", 01:15:31.275 "recv_buf_size": 2097152, 01:15:31.275 "send_buf_size": 2097152, 01:15:31.275 "tls_version": 0, 01:15:31.275 "zerocopy_threshold": 0 01:15:31.275 } 01:15:31.275 } 01:15:31.275 ] 01:15:31.275 }, 01:15:31.275 { 01:15:31.275 "subsystem": "vmd", 01:15:31.275 "config": [] 01:15:31.275 }, 01:15:31.275 { 01:15:31.275 "subsystem": "accel", 01:15:31.275 "config": [ 01:15:31.275 { 01:15:31.275 "method": "accel_set_options", 01:15:31.275 "params": { 01:15:31.275 "buf_count": 2048, 01:15:31.275 "large_cache_size": 16, 01:15:31.275 "sequence_count": 2048, 01:15:31.275 "small_cache_size": 128, 01:15:31.275 "task_count": 2048 01:15:31.275 } 01:15:31.275 } 01:15:31.275 ] 01:15:31.275 }, 01:15:31.275 { 01:15:31.275 "subsystem": "bdev", 01:15:31.275 "config": [ 01:15:31.275 { 01:15:31.275 "method": "bdev_set_options", 01:15:31.275 "params": { 01:15:31.275 "bdev_auto_examine": true, 01:15:31.275 "bdev_io_cache_size": 256, 01:15:31.275 "bdev_io_pool_size": 65535, 01:15:31.275 "iobuf_large_cache_size": 16, 01:15:31.275 "iobuf_small_cache_size": 128 01:15:31.275 } 01:15:31.275 }, 01:15:31.275 { 01:15:31.275 "method": "bdev_raid_set_options", 01:15:31.275 "params": { 01:15:31.276 "process_max_bandwidth_mb_sec": 0, 01:15:31.276 "process_window_size_kb": 1024 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "bdev_iscsi_set_options", 01:15:31.276 "params": { 01:15:31.276 "timeout_sec": 30 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "bdev_nvme_set_options", 01:15:31.276 "params": { 01:15:31.276 "action_on_timeout": "none", 01:15:31.276 "allow_accel_sequence": false, 01:15:31.276 "arbitration_burst": 0, 01:15:31.276 "bdev_retry_count": 3, 01:15:31.276 "ctrlr_loss_timeout_sec": 0, 01:15:31.276 "delay_cmd_submit": true, 01:15:31.276 "dhchap_dhgroups": [ 01:15:31.276 "null", 01:15:31.276 "ffdhe2048", 01:15:31.276 "ffdhe3072", 01:15:31.276 "ffdhe4096", 01:15:31.276 "ffdhe6144", 01:15:31.276 "ffdhe8192" 01:15:31.276 ], 01:15:31.276 "dhchap_digests": [ 01:15:31.276 "sha256", 01:15:31.276 "sha384", 01:15:31.276 "sha512" 01:15:31.276 ], 01:15:31.276 "disable_auto_failback": false, 01:15:31.276 "fast_io_fail_timeout_sec": 0, 01:15:31.276 "generate_uuids": false, 01:15:31.276 "high_priority_weight": 0, 01:15:31.276 "io_path_stat": false, 01:15:31.276 "io_queue_requests": 0, 01:15:31.276 "keep_alive_timeout_ms": 10000, 01:15:31.276 "low_priority_weight": 0, 01:15:31.276 "medium_priority_weight": 0, 01:15:31.276 "nvme_adminq_poll_period_us": 10000, 
01:15:31.276 "nvme_error_stat": false, 01:15:31.276 "nvme_ioq_poll_period_us": 0, 01:15:31.276 "rdma_cm_event_timeout_ms": 0, 01:15:31.276 "rdma_max_cq_size": 0, 01:15:31.276 "rdma_srq_size": 0, 01:15:31.276 "reconnect_delay_sec": 0, 01:15:31.276 "timeout_admin_us": 0, 01:15:31.276 "timeout_us": 0, 01:15:31.276 "transport_ack_timeout": 0, 01:15:31.276 "transport_retry_count": 4, 01:15:31.276 "transport_tos": 0 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "bdev_nvme_set_hotplug", 01:15:31.276 "params": { 01:15:31.276 "enable": false, 01:15:31.276 "period_us": 100000 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "bdev_malloc_create", 01:15:31.276 "params": { 01:15:31.276 "block_size": 4096, 01:15:31.276 "name": "malloc0", 01:15:31.276 "num_blocks": 8192, 01:15:31.276 "optimal_io_boundary": 0, 01:15:31.276 "physical_block_size": 4096, 01:15:31.276 "uuid": "2a05cb3a-3cc7-4d99-933d-8d7c028bb30b" 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "bdev_wait_for_examine" 01:15:31.276 } 01:15:31.276 ] 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "subsystem": "nbd", 01:15:31.276 "config": [] 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "subsystem": "scheduler", 01:15:31.276 "config": [ 01:15:31.276 { 01:15:31.276 "method": "framework_set_scheduler", 01:15:31.276 "params": { 01:15:31.276 "name": "static" 01:15:31.276 } 01:15:31.276 } 01:15:31.276 ] 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "subsystem": "nvmf", 01:15:31.276 "config": [ 01:15:31.276 { 01:15:31.276 "method": "nvmf_set_config", 01:15:31.276 "params": { 01:15:31.276 "admin_cmd_passthru": { 01:15:31.276 "identify_ctrlr": false 01:15:31.276 }, 01:15:31.276 "discovery_filter": "match_any" 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "nvmf_set_max_subsystems", 01:15:31.276 "params": { 01:15:31.276 "max_subsystems": 1024 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "nvmf_set_crdt", 01:15:31.276 "params": { 01:15:31.276 "crdt1": 0, 01:15:31.276 "crdt2": 0, 01:15:31.276 "crdt3": 0 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "nvmf_create_transport", 01:15:31.276 "params": { 01:15:31.276 "abort_timeout_sec": 1, 01:15:31.276 "ack_timeout": 0, 01:15:31.276 "buf_cache_size": 4294967295, 01:15:31.276 "c2h_success": false, 01:15:31.276 "data_wr_pool_size": 0, 01:15:31.276 "dif_insert_or_strip": false, 01:15:31.276 "in_capsule_data_size": 4096, 01:15:31.276 "io_unit_size": 131072, 01:15:31.276 "max_aq_depth": 128, 01:15:31.276 "max_io_qpairs_per_ctrlr": 127, 01:15:31.276 "max_io_size": 131072, 01:15:31.276 "max_queue_depth": 128, 01:15:31.276 "num_shared_buffers": 511, 01:15:31.276 "sock_priority": 0, 01:15:31.276 "trtype": "TCP", 01:15:31.276 "zcopy": false 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "nvmf_create_subsystem", 01:15:31.276 "params": { 01:15:31.276 " 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:31.276 allow_any_host": false, 01:15:31.276 "ana_reporting": false, 01:15:31.276 "max_cntlid": 65519, 01:15:31.276 "max_namespaces": 32, 01:15:31.276 "min_cntlid": 1, 01:15:31.276 "model_number": "SPDK bdev Controller", 01:15:31.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:31.276 "serial_number": "00000000000000000000" 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "nvmf_subsystem_add_host", 01:15:31.276 "params": { 01:15:31.276 "host": "nqn.2016-06.io.spdk:host1", 01:15:31.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 
01:15:31.276 "psk": "key0" 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "nvmf_subsystem_add_ns", 01:15:31.276 "params": { 01:15:31.276 "namespace": { 01:15:31.276 "bdev_name": "malloc0", 01:15:31.276 "nguid": "2A05CB3A3CC74D99933D8D7C028BB30B", 01:15:31.276 "no_auto_visible": false, 01:15:31.276 "nsid": 1, 01:15:31.276 "uuid": "2a05cb3a-3cc7-4d99-933d-8d7c028bb30b" 01:15:31.276 }, 01:15:31.276 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:15:31.276 } 01:15:31.276 }, 01:15:31.276 { 01:15:31.276 "method": "nvmf_subsystem_add_listener", 01:15:31.276 "params": { 01:15:31.276 "listen_address": { 01:15:31.276 "adrfam": "IPv4", 01:15:31.276 "traddr": "10.0.0.2", 01:15:31.276 "trsvcid": "4420", 01:15:31.276 "trtype": "TCP" 01:15:31.276 }, 01:15:31.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:15:31.276 "secure_channel": false, 01:15:31.276 "sock_impl": "ssl" 01:15:31.276 } 01:15:31.276 } 01:15:31.276 ] 01:15:31.276 } 01:15:31.276 ] 01:15:31.276 }' 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101317 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101317 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101317 ']' 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:31.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:31.276 11:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:31.276 [2024-07-22 11:12:36.364761] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:31.276 [2024-07-22 11:12:36.364846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:31.534 [2024-07-22 11:12:36.507824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:31.534 [2024-07-22 11:12:36.604889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:15:31.534 [2024-07-22 11:12:36.604954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:31.534 [2024-07-22 11:12:36.604980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:31.534 [2024-07-22 11:12:36.604991] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:31.534 [2024-07-22 11:12:36.605001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:15:31.534 [2024-07-22 11:12:36.605107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:15:31.792 [2024-07-22 11:12:36.862904] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:31.792 [2024-07-22 11:12:36.894865] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:31.792 [2024-07-22 11:12:36.895082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=101356 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 101356 /var/tmp/bdevperf.sock 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101356 ']' 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:32.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 01:15:32.358 11:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 01:15:32.358 "subsystems": [ 01:15:32.358 { 01:15:32.358 "subsystem": "keyring", 01:15:32.358 "config": [ 01:15:32.358 { 01:15:32.358 "method": "keyring_file_add_key", 01:15:32.358 "params": { 01:15:32.358 "name": "key0", 01:15:32.358 "path": "/tmp/tmp.NMZuEDqCJg" 01:15:32.358 } 01:15:32.358 } 01:15:32.358 ] 01:15:32.358 }, 01:15:32.358 { 01:15:32.358 "subsystem": "iobuf", 01:15:32.358 "config": [ 01:15:32.358 { 01:15:32.358 "method": "iobuf_set_options", 01:15:32.358 "params": { 01:15:32.358 "large_bufsize": 135168, 01:15:32.358 "large_pool_count": 1024, 01:15:32.358 "small_bufsize": 8192, 01:15:32.358 "small_pool_count": 8192 01:15:32.358 } 01:15:32.358 } 01:15:32.358 ] 01:15:32.358 }, 01:15:32.358 { 01:15:32.358 "subsystem": "sock", 01:15:32.358 "config": [ 01:15:32.358 { 01:15:32.358 "method": "sock_set_default_impl", 01:15:32.358 "params": { 01:15:32.358 "impl_name": "posix" 01:15:32.358 } 01:15:32.358 }, 01:15:32.358 { 01:15:32.358 "method": "sock_impl_set_options", 01:15:32.358 "params": { 01:15:32.358 "enable_ktls": false, 01:15:32.358 "enable_placement_id": 0, 01:15:32.358 "enable_quickack": false, 01:15:32.358 "enable_recv_pipe": true, 01:15:32.358 "enable_zerocopy_send_client": false, 01:15:32.358 "enable_zerocopy_send_server": true, 01:15:32.358 "impl_name": "ssl", 01:15:32.358 "recv_buf_size": 4096, 01:15:32.359 "send_buf_size": 4096, 01:15:32.359 "tls_version": 0, 01:15:32.359 "zerocopy_threshold": 0 01:15:32.359 } 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "method": "sock_impl_set_options", 01:15:32.359 "params": { 01:15:32.359 "enable_ktls": false, 01:15:32.359 "enable_placement_id": 0, 01:15:32.359 "enable_quickack": false, 01:15:32.359 "enable_recv_pipe": true, 01:15:32.359 "enable_zerocopy_send_client": false, 01:15:32.359 "enable_zerocopy_send_server": true, 01:15:32.359 "impl_name": "posix", 01:15:32.359 "recv_buf_size": 2097152, 01:15:32.359 "send_buf_size": 2097152, 01:15:32.359 "tls_version": 0, 01:15:32.359 "zerocopy_threshold": 0 01:15:32.359 } 01:15:32.359 } 01:15:32.359 ] 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "subsystem": "vmd", 01:15:32.359 "config": [] 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "subsystem": "accel", 01:15:32.359 "config": [ 01:15:32.359 { 01:15:32.359 "method": "accel_set_options", 01:15:32.359 "params": { 01:15:32.359 "buf_count": 2048, 01:15:32.359 "large_cache_size": 16, 01:15:32.359 "sequence_count": 2048, 01:15:32.359 "small_cache_size": 128, 01:15:32.359 "task_count": 2048 01:15:32.359 } 01:15:32.359 } 01:15:32.359 ] 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "subsystem": "bdev", 01:15:32.359 "config": [ 01:15:32.359 { 01:15:32.359 "method": "bdev_set_options", 01:15:32.359 "params": { 01:15:32.359 "bdev_auto_examine": true, 01:15:32.359 "bdev_io_cache_size": 256, 01:15:32.359 "bdev_io_pool_size": 65535, 01:15:32.359 "iobuf_large_cache_size": 16, 01:15:32.359 "iobuf_small_cache_size": 128 01:15:32.359 } 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "method": "bdev_raid_set_options", 01:15:32.359 "params": { 01:15:32.359 "process_max_bandwidth_mb_sec": 0, 01:15:32.359 "process_window_size_kb": 
1024 01:15:32.359 } 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "method": "bdev_iscsi_set_options", 01:15:32.359 "params": { 01:15:32.359 "timeout_sec": 30 01:15:32.359 } 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "method": "bdev_nvme_set_options", 01:15:32.359 "params": { 01:15:32.359 "action_on_timeout": "none", 01:15:32.359 "allow_accel_sequence": false, 01:15:32.359 "arbitration_burst": 0, 01:15:32.359 "bdev_retry_count": 3, 01:15:32.359 "ctrlr_loss_timeout_sec": 0, 01:15:32.359 "delay_cmd_submit": true, 01:15:32.359 "dhchap_dhgroups": [ 01:15:32.359 "null", 01:15:32.359 "ffdhe2048", 01:15:32.359 "ffdhe3072", 01:15:32.359 "ffdhe4096", 01:15:32.359 "ffdhe6144", 01:15:32.359 "ffdhe8192" 01:15:32.359 ], 01:15:32.359 "dhchap_digests": [ 01:15:32.359 "sha256", 01:15:32.359 "sha384", 01:15:32.359 "sha512" 01:15:32.359 ], 01:15:32.359 "disable_auto_failback": false, 01:15:32.359 "fast_io_fail_timeout_sec": 0, 01:15:32.359 "generate_uuids": false, 01:15:32.359 "high_priority_weight": 0, 01:15:32.359 "io_path_stat": false, 01:15:32.359 "io_queue_requests": 512, 01:15:32.359 "keep_alive_timeout_ms": 10000, 01:15:32.359 "low_priority_weight": 0, 01:15:32.359 "medium_priority_weight": 0, 01:15:32.359 "nvme_adminq_poll_period_us": 10000, 01:15:32.359 "nvme_error_stat": false, 01:15:32.359 "nvme_ioq_poll_period_us": 0, 01:15:32.359 "rdma_cm_event_timeout_ms": 0, 01:15:32.359 "rdma_max_cq_size": 0, 01:15:32.359 "rdma_srq_size": 0, 01:15:32.359 "reconnect_delay_sec": 0, 01:15:32.359 "timeout_admin_us": 0, 01:15:32.359 "timeout_us": 0, 01:15:32.359 "transport_ack_timeout": 0, 01:15:32.359 "transport_retry_count": 4, 01:15:32.359 "transport_tos": 0 01:15:32.359 } 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "method": "bdev_nvme_attach_controller", 01:15:32.359 "params": { 01:15:32.359 "adrfam": "IPv4", 01:15:32.359 "ctrlr_loss_timeout_sec": 0, 01:15:32.359 "ddgst": false, 01:15:32.359 "fast_io_fail_timeout_sec": 0, 01:15:32.359 "hdgst": false, 01:15:32.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:15:32.359 "name": "nvme0", 01:15:32.359 "prchk_guard": false, 01:15:32.359 "prchk_reftag": false, 01:15:32.359 "psk": "key0", 01:15:32.359 "reconnect_delay_sec": 0, 01:15:32.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:15:32.359 "traddr": "10.0.0.2", 01:15:32.359 "trsvcid": "4420", 01:15:32.359 "trtype": "TCP" 01:15:32.359 } 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "method": "bdev_nvme_set_hotplug", 01:15:32.359 "params": { 01:15:32.359 "enable": false, 01:15:32.359 "period_us": 100000 01:15:32.359 } 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "method": "bdev_enable_histogram", 01:15:32.359 "params": { 01:15:32.359 "enable": true, 01:15:32.359 "name": "nvme0n1" 01:15:32.359 } 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "method": "bdev_wait_for_examine" 01:15:32.359 } 01:15:32.359 ] 01:15:32.359 }, 01:15:32.359 { 01:15:32.359 "subsystem": "nbd", 01:15:32.359 "config": [] 01:15:32.359 } 01:15:32.359 ] 01:15:32.359 }' 01:15:32.359 [2024-07-22 11:12:37.442152] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
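Note on the config blob traced above: it is what tls.sh feeds bdevperf through `-c /dev/fd/63` (a process substitution), and the TLS-relevant part is only the keyring entry plus the controller attach that references it by name ("psk": "key0"). Below is a trimmed sketch of that pattern, keeping the key path, addresses and NQNs from this run and assuming the omitted subsystems (sock, iobuf, accel, ...) are fine at their defaults — it is an illustration, not the exact config the harness generates.

cfg='{
  "subsystems": [
    { "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.NMZuEDqCJg" } }
      ] },
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "psk": "key0" } },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}'
# -z keeps bdevperf idle until perform_tests is sent over its RPC socket;
# -c <(...) is what appears as /dev/fd/63 in the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$cfg")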
01:15:32.359 [2024-07-22 11:12:37.442243] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101356 ] 01:15:32.617 [2024-07-22 11:12:37.583231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:32.617 [2024-07-22 11:12:37.648615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:32.617 [2024-07-22 11:12:37.810979] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:33.182 11:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:33.182 11:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 01:15:33.182 11:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:15:33.182 11:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 01:15:33.439 11:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:15:33.439 11:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:15:33.439 Running I/O for 1 seconds... 01:15:34.809 01:15:34.809 Latency(us) 01:15:34.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:34.809 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:15:34.809 Verification LBA range: start 0x0 length 0x2000 01:15:34.809 nvme0n1 : 1.01 4653.39 18.18 0.00 0.00 27286.80 4706.68 22758.87 01:15:34.809 =================================================================================================================== 01:15:34.809 Total : 4653.39 18.18 0.00 0.00 27286.80 4706.68 22758.87 01:15:34.809 0 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:15:34.809 nvmf_trace.0 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 101356 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101356 ']' 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101356 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:34.809 
11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101356 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:34.809 killing process with pid 101356 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101356' 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101356 01:15:34.809 Received shutdown signal, test time was about 1.000000 seconds 01:15:34.809 01:15:34.809 Latency(us) 01:15:34.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:34.809 =================================================================================================================== 01:15:34.809 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101356 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 01:15:34.809 11:12:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:15:34.809 rmmod nvme_tcp 01:15:34.809 rmmod nvme_fabrics 01:15:34.809 rmmod nvme_keyring 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 101317 ']' 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 101317 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101317 ']' 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101317 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:34.809 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101317 01:15:35.067 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:15:35.067 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:15:35.067 killing process with pid 101317 01:15:35.067 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101317' 01:15:35.067 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101317 01:15:35.067 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101317 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.JT9ZbINlTu /tmp/tmp.3x3eKd9fNx /tmp/tmp.NMZuEDqCJg 01:15:35.325 01:15:35.325 real 1m21.803s 01:15:35.325 user 2m1.837s 01:15:35.325 sys 0m31.089s 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 01:15:35.325 ************************************ 01:15:35.325 11:12:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:15:35.325 END TEST nvmf_tls 01:15:35.325 ************************************ 01:15:35.325 11:12:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:15:35.325 11:12:40 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:15:35.325 11:12:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:15:35.325 11:12:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:15:35.325 11:12:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:35.325 ************************************ 01:15:35.325 START TEST nvmf_fips 01:15:35.325 ************************************ 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:15:35.325 * Looking for test storage... 
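The teardown traced above always runs in the same order: unload the NVMe fabrics modules, stop the target, drop the test network namespace, flush the initiator interface and remove the temporary PSK files. A rough standalone version is sketched below; $nvmfpid stands for the target pid recorded at startup, and the netns delete is an assumption about what _remove_spdk_ns amounts to here, since its output is suppressed in the trace.

modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if
rm -f /tmp/tmp.JT9ZbINlTu /tmp/tmp.3x3eKd9fNx /tmp/tmp.NMZuEDqCJg   # this run's PSK temp files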
01:15:35.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 01:15:35.325 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 01:15:35.584 Error setting digest 01:15:35.584 00420BDB597F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 01:15:35.584 00420BDB597F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:15:35.584 Cannot find device "nvmf_tgt_br" 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:15:35.584 Cannot find device "nvmf_tgt_br2" 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:15:35.584 Cannot find device "nvmf_tgt_br" 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:15:35.584 Cannot find device "nvmf_tgt_br2" 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 01:15:35.584 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:15:35.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:15:35.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:15:35.843 11:12:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:15:35.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:15:35.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 01:15:35.843 01:15:35.843 --- 10.0.0.2 ping statistics --- 01:15:35.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:35.843 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:15:35.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:15:35.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 01:15:35.843 01:15:35.843 --- 10.0.0.3 ping statistics --- 01:15:35.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:35.843 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:15:35.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:15:35.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:15:35.843 01:15:35.843 --- 10.0.0.1 ping statistics --- 01:15:35.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:35.843 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:35.843 11:12:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=101640 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 101640 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 101640 ']' 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:36.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:36.101 11:12:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:15:36.101 [2024-07-22 11:12:41.139260] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
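The nvmf_veth_init sequence traced above builds a small topology: the target runs inside the nvmf_tgt_ns_spdk namespace and reaches the host-side initiator over a Linux bridge, verified by the three pings. A condensed recap of those commands follows, with the second target interface (nvmf_tgt_if2 / 10.0.0.3) left out for brevity; it is a sketch of the setup shown here, not a general recipe.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side veth ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2        # initiator must reach the target address before nvmf_tgt starts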
01:15:36.101 [2024-07-22 11:12:41.139352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:36.101 [2024-07-22 11:12:41.282481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:36.359 [2024-07-22 11:12:41.354830] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:15:36.359 [2024-07-22 11:12:41.354890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:36.359 [2024-07-22 11:12:41.354904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:36.359 [2024-07-22 11:12:41.354916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:36.359 [2024-07-22 11:12:41.354926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:15:36.359 [2024-07-22 11:12:41.354973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:36.925 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:36.925 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 01:15:36.925 11:12:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:36.925 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:36.925 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:15:37.183 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:15:37.442 [2024-07-22 11:12:42.419750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:37.442 [2024-07-22 11:12:42.435687] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:37.442 [2024-07-22 11:12:42.435879] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:37.442 [2024-07-22 11:12:42.469644] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:15:37.442 malloc0 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=101696 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 101696 /var/tmp/bdevperf.sock 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 101696 ']' 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:15:37.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:37.442 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:15:37.442 [2024-07-22 11:12:42.548129] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:37.442 [2024-07-22 11:12:42.548185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101696 ] 01:15:37.700 [2024-07-22 11:12:42.684062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:37.700 [2024-07-22 11:12:42.771186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:15:37.958 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:37.958 11:12:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 01:15:37.958 11:12:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:15:37.958 [2024-07-22 11:12:43.147234] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:37.958 [2024-07-22 11:12:43.147388] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:15:38.214 TLSTESTn1 01:15:38.214 11:12:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:15:38.214 Running I/O for 10 seconds... 
01:15:48.181 01:15:48.181 Latency(us) 01:15:48.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:48.181 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:15:48.181 Verification LBA range: start 0x0 length 0x2000 01:15:48.181 TLSTESTn1 : 10.02 4649.82 18.16 0.00 0.00 27470.13 7238.75 29074.15 01:15:48.181 =================================================================================================================== 01:15:48.181 Total : 4649.82 18.16 0.00 0.00 27470.13 7238.75 29074.15 01:15:48.181 0 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 01:15:48.181 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:15:48.181 nvmf_trace.0 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 101696 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 101696 ']' 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 101696 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101696 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:15:48.439 killing process with pid 101696 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101696' 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 101696 01:15:48.439 Received shutdown signal, test time was about 10.000000 seconds 01:15:48.439 01:15:48.439 Latency(us) 01:15:48.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:48.439 =================================================================================================================== 01:15:48.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:48.439 [2024-07-22 11:12:53.494702] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:15:48.439 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 101696 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:15:48.697 rmmod nvme_tcp 01:15:48.697 rmmod nvme_fabrics 01:15:48.697 rmmod nvme_keyring 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 101640 ']' 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 101640 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 101640 ']' 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 101640 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101640 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:15:48.697 killing process with pid 101640 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101640' 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 101640 01:15:48.697 [2024-07-22 11:12:53.814155] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:15:48.697 11:12:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 101640 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 01:15:48.955 ************************************ 01:15:48.955 END TEST nvmf_fips 01:15:48.955 ************************************ 01:15:48.955 01:15:48.955 real 0m13.741s 01:15:48.955 user 0m17.467s 01:15:48.955 sys 0m6.131s 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 01:15:48.955 11:12:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 
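For reference, the fips.sh flow that just finished does the TLS attach over bdevperf's RPC socket rather than a start-up config file: bdevperf is started idle with -z, the harness waits for /var/tmp/bdevperf.sock to appear, attaches the controller with the key file, and then perform_tests kicks off the 10-second run. The commands below are condensed from the trace above; note that the log's own warnings flag the PSK-path form of the attach as deprecated and scheduled for removal in v24.09.

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# (the harness' waitforlisten polls until the socket above exists before issuing RPCs)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests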
01:15:49.214 11:12:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:15:49.214 11:12:54 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 01:15:49.214 11:12:54 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 01:15:49.214 11:12:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:15:49.214 11:12:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:15:49.214 11:12:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:49.214 ************************************ 01:15:49.214 START TEST nvmf_fuzz 01:15:49.214 ************************************ 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 01:15:49.214 * Looking for test storage... 01:15:49.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:49.214 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 01:15:49.215 11:12:54 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:15:49.215 Cannot find device "nvmf_tgt_br" 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:15:49.215 Cannot find device "nvmf_tgt_br2" 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:15:49.215 Cannot find device "nvmf_tgt_br" 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:15:49.215 Cannot find device "nvmf_tgt_br2" 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 01:15:49.215 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:15:49.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:15:49.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:15:49.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:15:49.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 01:15:49.474 01:15:49.474 --- 10.0.0.2 ping statistics --- 01:15:49.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:49.474 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:15:49.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:15:49.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 01:15:49.474 01:15:49.474 --- 10.0.0.3 ping statistics --- 01:15:49.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:49.474 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:15:49.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:15:49.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:15:49.474 01:15:49.474 --- 10.0.0.1 ping statistics --- 01:15:49.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:49.474 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:15:49.474 11:12:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=102023 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 102023 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 102023 ']' 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:49.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:49.732 11:12:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:15:50.678 Malloc0 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 01:15:50.678 11:12:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 01:15:50.936 Shutting down the fuzz application 01:15:50.936 11:12:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 01:15:51.501 Shutting down the fuzz application 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:15:51.501 rmmod nvme_tcp 01:15:51.501 rmmod nvme_fabrics 01:15:51.501 rmmod nvme_keyring 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 102023 ']' 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 102023 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 102023 ']' 01:15:51.501 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 102023 01:15:51.502 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 01:15:51.502 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:15:51.502 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102023 01:15:51.502 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:15:51.502 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:15:51.502 killing process with pid 102023 01:15:51.502 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102023' 01:15:51.502 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 102023 01:15:51.502 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 102023 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 01:15:51.760 01:15:51.760 real 0m2.723s 01:15:51.760 user 0m2.684s 01:15:51.760 sys 0m0.730s 01:15:51.760 11:12:56 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 01:15:51.760 11:12:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 01:15:51.760 ************************************ 01:15:51.760 END TEST nvmf_fuzz 01:15:51.760 ************************************ 01:15:51.760 11:12:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:15:51.760 11:12:56 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 01:15:51.760 11:12:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:15:51.760 11:12:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:15:51.760 11:12:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:52.018 ************************************ 01:15:52.018 START TEST nvmf_multiconnection 01:15:52.018 ************************************ 01:15:52.018 11:12:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 01:15:52.018 * Looking for test storage... 01:15:52.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:15:52.018 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:15:52.019 Cannot find device "nvmf_tgt_br" 01:15:52.019 11:12:57 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:15:52.019 Cannot find device "nvmf_tgt_br2" 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:15:52.019 Cannot find device "nvmf_tgt_br" 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:15:52.019 Cannot find device "nvmf_tgt_br2" 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:15:52.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:15:52.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 01:15:52.019 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:15:52.277 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:15:52.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:15:52.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 01:15:52.278 01:15:52.278 --- 10.0.0.2 ping statistics --- 01:15:52.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:52.278 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:15:52.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:15:52.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 01:15:52.278 01:15:52.278 --- 10.0.0.3 ping statistics --- 01:15:52.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:52.278 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:15:52.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:15:52.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 01:15:52.278 01:15:52.278 --- 10.0.0.1 ping statistics --- 01:15:52.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:15:52.278 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=102231 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 102231 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 102231 ']' 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:52.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 01:15:52.278 11:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:52.535 [2024-07-22 11:12:57.508200] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:15:52.535 [2024-07-22 11:12:57.508278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:15:52.535 [2024-07-22 11:12:57.650639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:15:52.535 [2024-07-22 11:12:57.727249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:15:52.535 [2024-07-22 11:12:57.727651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:15:52.535 [2024-07-22 11:12:57.727802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:52.535 [2024-07-22 11:12:57.727933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:52.535 [2024-07-22 11:12:57.728016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:15:52.535 [2024-07-22 11:12:57.728181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:15:52.535 [2024-07-22 11:12:57.728673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:15:52.535 [2024-07-22 11:12:57.728862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:15:52.535 [2024-07-22 11:12:57.729050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.469 [2024-07-22 11:12:58.569672] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.469 Malloc1 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.469 [2024-07-22 11:12:58.650449] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.469 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 Malloc2 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 Malloc3 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 Malloc4 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 01:15:53.727 11:12:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 Malloc5 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 Malloc6 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.727 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 Malloc7 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 Malloc8 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 Malloc9 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 Malloc10 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:53.985 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:54.243 Malloc11 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK1 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:15:54.243 11:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:15:56.774 11:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:15:58.734 11:13:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:15:58.734 11:13:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:00.664 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:00.664 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:00.664 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 01:16:00.664 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:00.664 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:00.664 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:00.664 11:13:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:00.664 11:13:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 01:16:00.942 11:13:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 01:16:00.942 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:16:00.942 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:00.942 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:00.943 11:13:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:02.865 11:13:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:02.865 11:13:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:02.865 11:13:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 01:16:02.865 11:13:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:02.865 11:13:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:02.865 11:13:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:02.865 11:13:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:02.865 11:13:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 01:16:03.123 11:13:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 01:16:03.123 11:13:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:16:03.123 11:13:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:03.123 11:13:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:03.123 11:13:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:05.019 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:05.019 11:13:10 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:05.019 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 01:16:05.019 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:05.019 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:05.019 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:05.019 11:13:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:05.019 11:13:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 01:16:05.277 11:13:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 01:16:05.278 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:16:05.278 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:05.278 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:05.278 11:13:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:07.179 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:07.179 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:07.179 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:07.438 11:13:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:09.967 
11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:09.967 11:13:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:11.886 11:13:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:13.783 11:13:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:13.783 11:13:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:13.783 11:13:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 01:16:14.040 11:13:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:14.040 11:13:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:14.040 11:13:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:14.040 11:13:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 01:16:14.040 11:13:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 01:16:14.040 11:13:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 01:16:14.040 11:13:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:16:14.040 11:13:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:14.040 11:13:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:14.040 11:13:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:16.570 11:13:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 01:16:18.472 11:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:18.472 11:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 01:16:18.472 11:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:18.472 11:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:18.472 11:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:18.472 11:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 01:16:18.472 11:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 01:16:18.472 [global] 01:16:18.472 thread=1 01:16:18.472 invalidate=1 01:16:18.472 rw=read 01:16:18.472 time_based=1 01:16:18.472 runtime=10 01:16:18.472 ioengine=libaio 01:16:18.472 direct=1 01:16:18.472 bs=262144 01:16:18.472 iodepth=64 
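Condensed, the setup and attach phase traced above amounts to the two loops sketched below. This is only a sketch of what the trace records: rpc_cmd and waitforserial are the autotest harness helpers seen in the trace, NVMF_SUBSYS is 11 in this run, and $HOSTNQN/$HOSTID stand in for the uuid-based host NQN and host ID values printed in the nvme connect lines.

# per-subsystem target setup (multiconnection.sh steps 21-25 in the trace)
for i in $(seq 1 $NVMF_SUBSYS); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# host-side attach (multiconnection.sh steps 28-30 in the trace)
for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
        -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i"   # polls 'lsblk -l -o NAME,SERIAL' until the SPDK$i serial shows up
done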
01:16:18.472 norandommap=1 01:16:18.472 numjobs=1 01:16:18.472 01:16:18.472 [job0] 01:16:18.472 filename=/dev/nvme0n1 01:16:18.472 [job1] 01:16:18.472 filename=/dev/nvme10n1 01:16:18.472 [job2] 01:16:18.472 filename=/dev/nvme1n1 01:16:18.472 [job3] 01:16:18.472 filename=/dev/nvme2n1 01:16:18.472 [job4] 01:16:18.472 filename=/dev/nvme3n1 01:16:18.472 [job5] 01:16:18.472 filename=/dev/nvme4n1 01:16:18.472 [job6] 01:16:18.472 filename=/dev/nvme5n1 01:16:18.472 [job7] 01:16:18.472 filename=/dev/nvme6n1 01:16:18.472 [job8] 01:16:18.472 filename=/dev/nvme7n1 01:16:18.472 [job9] 01:16:18.472 filename=/dev/nvme8n1 01:16:18.472 [job10] 01:16:18.472 filename=/dev/nvme9n1 01:16:18.472 Could not set queue depth (nvme0n1) 01:16:18.472 Could not set queue depth (nvme10n1) 01:16:18.472 Could not set queue depth (nvme1n1) 01:16:18.472 Could not set queue depth (nvme2n1) 01:16:18.472 Could not set queue depth (nvme3n1) 01:16:18.472 Could not set queue depth (nvme4n1) 01:16:18.472 Could not set queue depth (nvme5n1) 01:16:18.472 Could not set queue depth (nvme6n1) 01:16:18.472 Could not set queue depth (nvme7n1) 01:16:18.472 Could not set queue depth (nvme8n1) 01:16:18.472 Could not set queue depth (nvme9n1) 01:16:18.730 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:18.730 fio-3.35 01:16:18.730 Starting 11 threads 01:16:30.931 01:16:30.931 job0: (groupid=0, jobs=1): err= 0: pid=102713: Mon Jul 22 11:13:34 2024 01:16:30.931 read: IOPS=683, BW=171MiB/s (179MB/s)(1717MiB/10051msec) 01:16:30.931 slat (usec): min=20, max=84740, avg=1412.31, stdev=5303.84 01:16:30.931 clat (msec): min=32, max=190, avg=92.06, stdev=32.58 01:16:30.931 lat (msec): min=32, max=217, avg=93.48, stdev=33.31 01:16:30.931 clat percentiles (msec): 01:16:30.931 | 1.00th=[ 44], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 63], 01:16:30.931 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 104], 01:16:30.931 | 70.00th=[ 112], 80.00th=[ 123], 90.00th=[ 138], 95.00th=[ 155], 01:16:30.931 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 182], 99.95th=[ 190], 01:16:30.931 | 99.99th=[ 190] 01:16:30.931 bw ( KiB/s): min=96768, max=264686, per=11.21%, avg=174164.05, stdev=58412.82, samples=20 01:16:30.931 iops : min= 378, max= 1033, avg=680.20, stdev=228.09, samples=20 
01:16:30.931 lat (msec) : 50=3.33%, 100=54.63%, 250=42.04% 01:16:30.931 cpu : usr=0.26%, sys=2.41%, ctx=1410, majf=0, minf=4097 01:16:30.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 01:16:30.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.931 issued rwts: total=6868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.931 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.931 job1: (groupid=0, jobs=1): err= 0: pid=102714: Mon Jul 22 11:13:34 2024 01:16:30.931 read: IOPS=390, BW=97.6MiB/s (102MB/s)(995MiB/10188msec) 01:16:30.931 slat (usec): min=22, max=91283, avg=2489.42, stdev=8153.61 01:16:30.931 clat (msec): min=19, max=370, avg=161.05, stdev=51.68 01:16:30.931 lat (msec): min=19, max=370, avg=163.54, stdev=52.99 01:16:30.931 clat percentiles (msec): 01:16:30.931 | 1.00th=[ 40], 5.00th=[ 66], 10.00th=[ 77], 20.00th=[ 136], 01:16:30.931 | 30.00th=[ 150], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 182], 01:16:30.931 | 70.00th=[ 194], 80.00th=[ 205], 90.00th=[ 218], 95.00th=[ 224], 01:16:30.931 | 99.00th=[ 245], 99.50th=[ 300], 99.90th=[ 363], 99.95th=[ 372], 01:16:30.931 | 99.99th=[ 372] 01:16:30.931 bw ( KiB/s): min=69493, max=200192, per=6.45%, avg=100190.50, stdev=35310.21, samples=20 01:16:30.931 iops : min= 271, max= 782, avg=391.30, stdev=137.97, samples=20 01:16:30.931 lat (msec) : 20=0.08%, 50=1.13%, 100=17.45%, 250=80.64%, 500=0.70% 01:16:30.931 cpu : usr=0.13%, sys=1.45%, ctx=808, majf=0, minf=4097 01:16:30.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 01:16:30.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.931 issued rwts: total=3978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.931 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.931 job2: (groupid=0, jobs=1): err= 0: pid=102715: Mon Jul 22 11:13:34 2024 01:16:30.931 read: IOPS=460, BW=115MiB/s (121MB/s)(1168MiB/10140msec) 01:16:30.931 slat (usec): min=22, max=87584, avg=2144.50, stdev=8254.67 01:16:30.931 clat (msec): min=10, max=304, avg=136.51, stdev=66.02 01:16:30.931 lat (msec): min=10, max=316, avg=138.65, stdev=67.42 01:16:30.931 clat percentiles (msec): 01:16:30.931 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 43], 01:16:30.931 | 30.00th=[ 131], 40.00th=[ 146], 50.00th=[ 155], 60.00th=[ 167], 01:16:30.931 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 205], 95.00th=[ 215], 01:16:30.931 | 99.00th=[ 247], 99.50th=[ 268], 99.90th=[ 305], 99.95th=[ 305], 01:16:30.931 | 99.99th=[ 305] 01:16:30.931 bw ( KiB/s): min=68232, max=414720, per=7.59%, avg=117943.90, stdev=82967.40, samples=20 01:16:30.931 iops : min= 266, max= 1620, avg=460.50, stdev=324.18, samples=20 01:16:30.931 lat (msec) : 20=1.03%, 50=24.70%, 100=2.16%, 250=71.28%, 500=0.83% 01:16:30.931 cpu : usr=0.16%, sys=1.52%, ctx=1011, majf=0, minf=4097 01:16:30.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 01:16:30.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.931 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.931 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.931 job3: (groupid=0, jobs=1): err= 0: pid=102716: Mon Jul 22 11:13:34 2024 
01:16:30.931 read: IOPS=961, BW=240MiB/s (252MB/s)(2414MiB/10044msec) 01:16:30.931 slat (usec): min=17, max=85795, avg=1029.04, stdev=4115.60 01:16:30.931 clat (msec): min=19, max=171, avg=65.42, stdev=23.50 01:16:30.931 lat (msec): min=19, max=239, avg=66.45, stdev=24.01 01:16:30.931 clat percentiles (msec): 01:16:30.931 | 1.00th=[ 23], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 44], 01:16:30.931 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 71], 01:16:30.931 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 94], 01:16:30.931 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 171], 01:16:30.931 | 99.99th=[ 171] 01:16:30.931 bw ( KiB/s): min=99328, max=424785, per=15.80%, avg=245551.40, stdev=80249.37, samples=20 01:16:30.932 iops : min= 388, max= 1659, avg=958.95, stdev=313.43, samples=20 01:16:30.932 lat (msec) : 20=0.41%, 50=25.06%, 100=70.45%, 250=4.07% 01:16:30.932 cpu : usr=0.27%, sys=2.98%, ctx=1589, majf=0, minf=4097 01:16:30.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:16:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.932 issued rwts: total=9655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.932 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.932 job4: (groupid=0, jobs=1): err= 0: pid=102717: Mon Jul 22 11:13:34 2024 01:16:30.932 read: IOPS=521, BW=130MiB/s (137MB/s)(1322MiB/10144msec) 01:16:30.932 slat (usec): min=21, max=129436, avg=1883.11, stdev=7883.41 01:16:30.932 clat (msec): min=45, max=308, avg=120.71, stdev=57.40 01:16:30.932 lat (msec): min=45, max=308, avg=122.59, stdev=58.62 01:16:30.932 clat percentiles (msec): 01:16:30.932 | 1.00th=[ 56], 5.00th=[ 64], 10.00th=[ 67], 20.00th=[ 72], 01:16:30.932 | 30.00th=[ 77], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 109], 01:16:30.932 | 70.00th=[ 171], 80.00th=[ 192], 90.00th=[ 203], 95.00th=[ 213], 01:16:30.932 | 99.00th=[ 239], 99.50th=[ 262], 99.90th=[ 292], 99.95th=[ 292], 01:16:30.932 | 99.99th=[ 309] 01:16:30.932 bw ( KiB/s): min=69771, max=221696, per=8.60%, avg=133662.35, stdev=61125.30, samples=20 01:16:30.932 iops : min= 272, max= 866, avg=522.00, stdev=238.84, samples=20 01:16:30.932 lat (msec) : 50=0.21%, 100=58.57%, 250=40.35%, 500=0.87% 01:16:30.932 cpu : usr=0.16%, sys=1.78%, ctx=981, majf=0, minf=4097 01:16:30.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 01:16:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.932 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.932 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.932 job5: (groupid=0, jobs=1): err= 0: pid=102718: Mon Jul 22 11:13:34 2024 01:16:30.932 read: IOPS=366, BW=91.6MiB/s (96.1MB/s)(930MiB/10147msec) 01:16:30.932 slat (usec): min=22, max=108305, avg=2650.34, stdev=8856.58 01:16:30.932 clat (msec): min=27, max=323, avg=171.68, stdev=31.39 01:16:30.932 lat (msec): min=27, max=323, avg=174.33, stdev=32.78 01:16:30.932 clat percentiles (msec): 01:16:30.932 | 1.00th=[ 75], 5.00th=[ 133], 10.00th=[ 140], 20.00th=[ 148], 01:16:30.932 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 178], 01:16:30.932 | 70.00th=[ 188], 80.00th=[ 199], 90.00th=[ 213], 95.00th=[ 222], 01:16:30.932 | 99.00th=[ 234], 99.50th=[ 253], 99.90th=[ 305], 99.95th=[ 326], 01:16:30.932 | 
99.99th=[ 326] 01:16:30.932 bw ( KiB/s): min=71168, max=116224, per=6.02%, avg=93508.20, stdev=13758.43, samples=20 01:16:30.932 iops : min= 278, max= 454, avg=365.20, stdev=53.80, samples=20 01:16:30.932 lat (msec) : 50=0.13%, 100=1.91%, 250=97.44%, 500=0.51% 01:16:30.932 cpu : usr=0.19%, sys=1.34%, ctx=825, majf=0, minf=4097 01:16:30.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 01:16:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.932 issued rwts: total=3718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.932 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.932 job6: (groupid=0, jobs=1): err= 0: pid=102719: Mon Jul 22 11:13:34 2024 01:16:30.932 read: IOPS=366, BW=91.7MiB/s (96.1MB/s)(929MiB/10137msec) 01:16:30.932 slat (usec): min=22, max=104608, avg=2663.26, stdev=9446.04 01:16:30.932 clat (msec): min=24, max=312, avg=171.61, stdev=34.78 01:16:30.932 lat (msec): min=24, max=312, avg=174.27, stdev=36.29 01:16:30.932 clat percentiles (msec): 01:16:30.932 | 1.00th=[ 64], 5.00th=[ 128], 10.00th=[ 138], 20.00th=[ 146], 01:16:30.932 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 182], 01:16:30.932 | 70.00th=[ 190], 80.00th=[ 203], 90.00th=[ 211], 95.00th=[ 224], 01:16:30.932 | 99.00th=[ 262], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 01:16:30.932 | 99.99th=[ 313] 01:16:30.932 bw ( KiB/s): min=76288, max=130560, per=6.02%, avg=93529.65, stdev=15824.49, samples=20 01:16:30.932 iops : min= 298, max= 510, avg=365.05, stdev=61.91, samples=20 01:16:30.932 lat (msec) : 50=0.56%, 100=1.94%, 250=96.37%, 500=1.13% 01:16:30.932 cpu : usr=0.15%, sys=1.19%, ctx=800, majf=0, minf=4097 01:16:30.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 01:16:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.932 issued rwts: total=3717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.932 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.932 job7: (groupid=0, jobs=1): err= 0: pid=102720: Mon Jul 22 11:13:34 2024 01:16:30.932 read: IOPS=646, BW=162MiB/s (170MB/s)(1641MiB/10147msec) 01:16:30.932 slat (usec): min=21, max=144143, avg=1509.14, stdev=6393.94 01:16:30.932 clat (msec): min=23, max=358, avg=97.25, stdev=62.11 01:16:30.932 lat (msec): min=23, max=358, avg=98.76, stdev=63.21 01:16:30.932 clat percentiles (msec): 01:16:30.932 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 46], 01:16:30.932 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 83], 01:16:30.932 | 70.00th=[ 89], 80.00th=[ 176], 90.00th=[ 207], 95.00th=[ 218], 01:16:30.932 | 99.00th=[ 247], 99.50th=[ 266], 99.90th=[ 326], 99.95th=[ 342], 01:16:30.932 | 99.99th=[ 359] 01:16:30.932 bw ( KiB/s): min=69120, max=424960, per=10.71%, avg=166378.55, stdev=98484.81, samples=20 01:16:30.932 iops : min= 270, max= 1660, avg=649.85, stdev=384.75, samples=20 01:16:30.932 lat (msec) : 50=24.07%, 100=50.18%, 250=25.14%, 500=0.61% 01:16:30.932 cpu : usr=0.22%, sys=2.12%, ctx=1213, majf=0, minf=4097 01:16:30.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 01:16:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.932 issued rwts: total=6564,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 01:16:30.932 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.932 job8: (groupid=0, jobs=1): err= 0: pid=102721: Mon Jul 22 11:13:34 2024 01:16:30.932 read: IOPS=579, BW=145MiB/s (152MB/s)(1460MiB/10076msec) 01:16:30.932 slat (usec): min=15, max=57632, avg=1712.14, stdev=5892.38 01:16:30.932 clat (msec): min=22, max=192, avg=108.45, stdev=24.92 01:16:30.932 lat (msec): min=23, max=218, avg=110.16, stdev=25.70 01:16:30.932 clat percentiles (msec): 01:16:30.932 | 1.00th=[ 34], 5.00th=[ 66], 10.00th=[ 78], 20.00th=[ 95], 01:16:30.932 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 113], 01:16:30.932 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 136], 95.00th=[ 157], 01:16:30.932 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 178], 99.95th=[ 190], 01:16:30.932 | 99.99th=[ 192] 01:16:30.932 bw ( KiB/s): min=100352, max=219062, per=9.51%, avg=147829.15, stdev=28643.93, samples=20 01:16:30.932 iops : min= 392, max= 855, avg=577.25, stdev=111.81, samples=20 01:16:30.932 lat (msec) : 50=2.62%, 100=24.93%, 250=72.45% 01:16:30.932 cpu : usr=0.24%, sys=2.08%, ctx=1090, majf=0, minf=4097 01:16:30.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 01:16:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.932 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.932 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.932 job9: (groupid=0, jobs=1): err= 0: pid=102722: Mon Jul 22 11:13:34 2024 01:16:30.932 read: IOPS=629, BW=157MiB/s (165MB/s)(1586MiB/10077msec) 01:16:30.932 slat (usec): min=15, max=70517, avg=1560.81, stdev=5611.93 01:16:30.932 clat (msec): min=17, max=210, avg=99.92, stdev=29.78 01:16:30.932 lat (msec): min=17, max=229, avg=101.48, stdev=30.48 01:16:30.932 clat percentiles (msec): 01:16:30.932 | 1.00th=[ 52], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 74], 01:16:30.932 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 105], 01:16:30.932 | 70.00th=[ 114], 80.00th=[ 126], 90.00th=[ 144], 95.00th=[ 159], 01:16:30.932 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 201], 99.95th=[ 201], 01:16:30.932 | 99.99th=[ 211] 01:16:30.932 bw ( KiB/s): min=99328, max=225792, per=10.34%, avg=160653.35, stdev=41489.99, samples=20 01:16:30.932 iops : min= 388, max= 882, avg=627.40, stdev=162.12, samples=20 01:16:30.932 lat (msec) : 20=0.08%, 50=0.74%, 100=55.64%, 250=43.54% 01:16:30.932 cpu : usr=0.25%, sys=1.98%, ctx=1389, majf=0, minf=4097 01:16:30.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:16:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.932 issued rwts: total=6342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.932 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.932 job10: (groupid=0, jobs=1): err= 0: pid=102723: Mon Jul 22 11:13:34 2024 01:16:30.932 read: IOPS=517, BW=129MiB/s (136MB/s)(1303MiB/10077msec) 01:16:30.932 slat (usec): min=20, max=94792, avg=1877.01, stdev=6560.22 01:16:30.932 clat (msec): min=29, max=230, avg=121.60, stdev=19.50 01:16:30.932 lat (msec): min=30, max=238, avg=123.48, stdev=20.47 01:16:30.932 clat percentiles (msec): 01:16:30.932 | 1.00th=[ 74], 5.00th=[ 95], 10.00th=[ 103], 20.00th=[ 108], 01:16:30.932 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 120], 60.00th=[ 
123], 01:16:30.932 | 70.00th=[ 128], 80.00th=[ 136], 90.00th=[ 150], 95.00th=[ 161], 01:16:30.932 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 222], 01:16:30.932 | 99.99th=[ 230] 01:16:30.932 bw ( KiB/s): min=100352, max=148992, per=8.48%, avg=131718.55, stdev=14184.64, samples=20 01:16:30.932 iops : min= 392, max= 582, avg=514.30, stdev=55.42, samples=20 01:16:30.932 lat (msec) : 50=0.04%, 100=8.52%, 250=91.44% 01:16:30.932 cpu : usr=0.33%, sys=1.71%, ctx=960, majf=0, minf=4097 01:16:30.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 01:16:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:30.932 issued rwts: total=5211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:30.932 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:30.932 01:16:30.932 Run status group 0 (all jobs): 01:16:30.932 READ: bw=1518MiB/s (1591MB/s), 91.6MiB/s-240MiB/s (96.1MB/s-252MB/s), io=15.1GiB (16.2GB), run=10044-10188msec 01:16:30.932 01:16:30.932 Disk stats (read/write): 01:16:30.932 nvme0n1: ios=13682/0, merge=0/0, ticks=1242931/0, in_queue=1242931, util=97.51% 01:16:30.932 nvme10n1: ios=7837/0, merge=0/0, ticks=1241580/0, in_queue=1241580, util=97.67% 01:16:30.932 nvme1n1: ios=9217/0, merge=0/0, ticks=1236301/0, in_queue=1236301, util=97.78% 01:16:30.932 nvme2n1: ios=19203/0, merge=0/0, ticks=1235797/0, in_queue=1235797, util=97.58% 01:16:30.932 nvme3n1: ios=10444/0, merge=0/0, ticks=1231228/0, in_queue=1231228, util=97.92% 01:16:30.932 nvme4n1: ios=7314/0, merge=0/0, ticks=1237612/0, in_queue=1237612, util=98.39% 01:16:30.932 nvme5n1: ios=7307/0, merge=0/0, ticks=1235146/0, in_queue=1235146, util=98.15% 01:16:30.932 nvme6n1: ios=13030/0, merge=0/0, ticks=1234987/0, in_queue=1234987, util=98.30% 01:16:30.933 nvme7n1: ios=11553/0, merge=0/0, ticks=1239647/0, in_queue=1239647, util=98.74% 01:16:30.933 nvme8n1: ios=12581/0, merge=0/0, ticks=1236761/0, in_queue=1236761, util=98.33% 01:16:30.933 nvme9n1: ios=10325/0, merge=0/0, ticks=1242238/0, in_queue=1242238, util=98.95% 01:16:30.933 11:13:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 01:16:30.933 [global] 01:16:30.933 thread=1 01:16:30.933 invalidate=1 01:16:30.933 rw=randwrite 01:16:30.933 time_based=1 01:16:30.933 runtime=10 01:16:30.933 ioengine=libaio 01:16:30.933 direct=1 01:16:30.933 bs=262144 01:16:30.933 iodepth=64 01:16:30.933 norandommap=1 01:16:30.933 numjobs=1 01:16:30.933 01:16:30.933 [job0] 01:16:30.933 filename=/dev/nvme0n1 01:16:30.933 [job1] 01:16:30.933 filename=/dev/nvme10n1 01:16:30.933 [job2] 01:16:30.933 filename=/dev/nvme1n1 01:16:30.933 [job3] 01:16:30.933 filename=/dev/nvme2n1 01:16:30.933 [job4] 01:16:30.933 filename=/dev/nvme3n1 01:16:30.933 [job5] 01:16:30.933 filename=/dev/nvme4n1 01:16:30.933 [job6] 01:16:30.933 filename=/dev/nvme5n1 01:16:30.933 [job7] 01:16:30.933 filename=/dev/nvme6n1 01:16:30.933 [job8] 01:16:30.933 filename=/dev/nvme7n1 01:16:30.933 [job9] 01:16:30.933 filename=/dev/nvme8n1 01:16:30.933 [job10] 01:16:30.933 filename=/dev/nvme9n1 01:16:30.933 Could not set queue depth (nvme0n1) 01:16:30.933 Could not set queue depth (nvme10n1) 01:16:30.933 Could not set queue depth (nvme1n1) 01:16:30.933 Could not set queue depth (nvme2n1) 01:16:30.933 Could not set queue depth (nvme3n1) 01:16:30.933 Could not set queue depth (nvme4n1) 
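The randwrite pass now starting above reuses the job layout of the read pass; only the -t argument to the fio-wrapper changes. The parameters it echoes correspond to a job file along the lines of the sketch below (the file name is hypothetical and this is not the wrapper's actual implementation, just an equivalent standalone invocation).

cat > multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
# rw=read on the first pass, rw=randwrite on the second
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
# ... one [jobN] stanza per attached namespace, job0-job10 over the /dev/nvme*n1 devices listed above
EOF
fio multiconnection.fio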
01:16:30.933 Could not set queue depth (nvme5n1) 01:16:30.933 Could not set queue depth (nvme6n1) 01:16:30.933 Could not set queue depth (nvme7n1) 01:16:30.933 Could not set queue depth (nvme8n1) 01:16:30.933 Could not set queue depth (nvme9n1) 01:16:30.933 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 01:16:30.933 fio-3.35 01:16:30.933 Starting 11 threads 01:16:40.904 01:16:40.904 job0: (groupid=0, jobs=1): err= 0: pid=102921: Mon Jul 22 11:13:44 2024 01:16:40.904 write: IOPS=447, BW=112MiB/s (117MB/s)(1133MiB/10122msec); 0 zone resets 01:16:40.904 slat (usec): min=27, max=31227, avg=2200.34, stdev=3768.81 01:16:40.904 clat (msec): min=17, max=260, avg=140.65, stdev=12.52 01:16:40.904 lat (msec): min=18, max=260, avg=142.85, stdev=12.13 01:16:40.904 clat percentiles (msec): 01:16:40.904 | 1.00th=[ 107], 5.00th=[ 132], 10.00th=[ 133], 20.00th=[ 136], 01:16:40.904 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 01:16:40.904 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 148], 95.00th=[ 148], 01:16:40.904 | 99.00th=[ 163], 99.50th=[ 207], 99.90th=[ 253], 99.95th=[ 253], 01:16:40.904 | 99.99th=[ 262] 01:16:40.904 bw ( KiB/s): min=109056, max=118784, per=8.69%, avg=114432.00, stdev=2570.76, samples=20 01:16:40.904 iops : min= 426, max= 464, avg=447.00, stdev=10.04, samples=20 01:16:40.904 lat (msec) : 20=0.09%, 50=0.26%, 100=0.62%, 250=98.90%, 500=0.13% 01:16:40.904 cpu : usr=1.27%, sys=1.44%, ctx=4418, majf=0, minf=1 01:16:40.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 01:16:40.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.904 issued rwts: total=0,4533,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.904 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.904 job1: (groupid=0, jobs=1): err= 0: pid=102922: Mon Jul 22 11:13:44 2024 01:16:40.904 write: IOPS=452, BW=113MiB/s (119MB/s)(1146MiB/10132msec); 0 zone resets 01:16:40.904 slat (usec): min=17, max=19809, avg=2175.50, stdev=3721.47 01:16:40.904 clat (msec): min=4, max=267, avg=139.24, stdev=14.01 01:16:40.904 lat (msec): min=4, 
max=277, avg=141.41, stdev=13.71 01:16:40.904 clat percentiles (msec): 01:16:40.904 | 1.00th=[ 87], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 01:16:40.904 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 142], 01:16:40.904 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 146], 95.00th=[ 148], 01:16:40.904 | 99.00th=[ 169], 99.50th=[ 222], 99.90th=[ 259], 99.95th=[ 268], 01:16:40.904 | 99.99th=[ 268] 01:16:40.904 bw ( KiB/s): min=108032, max=119296, per=8.78%, avg=115712.00, stdev=2957.59, samples=20 01:16:40.904 iops : min= 422, max= 466, avg=452.00, stdev=11.55, samples=20 01:16:40.904 lat (msec) : 10=0.09%, 50=0.44%, 100=0.61%, 250=98.69%, 500=0.17% 01:16:40.904 cpu : usr=1.28%, sys=1.41%, ctx=3255, majf=0, minf=1 01:16:40.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 01:16:40.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.904 issued rwts: total=0,4583,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.904 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.904 job2: (groupid=0, jobs=1): err= 0: pid=102934: Mon Jul 22 11:13:44 2024 01:16:40.904 write: IOPS=416, BW=104MiB/s (109MB/s)(1056MiB/10139msec); 0 zone resets 01:16:40.904 slat (usec): min=21, max=28682, avg=2364.58, stdev=4062.03 01:16:40.904 clat (msec): min=3, max=281, avg=151.25, stdev=13.73 01:16:40.904 lat (msec): min=3, max=281, avg=153.62, stdev=13.30 01:16:40.904 clat percentiles (msec): 01:16:40.904 | 1.00th=[ 131], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 01:16:40.904 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 01:16:40.904 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 01:16:40.904 | 99.00th=[ 180], 99.50th=[ 239], 99.90th=[ 275], 99.95th=[ 275], 01:16:40.904 | 99.99th=[ 284] 01:16:40.904 bw ( KiB/s): min=99014, max=110592, per=8.08%, avg=106480.30, stdev=2417.32, samples=20 01:16:40.904 iops : min= 386, max= 432, avg=415.90, stdev= 9.57, samples=20 01:16:40.904 lat (msec) : 4=0.02%, 50=0.28%, 100=0.57%, 250=98.79%, 500=0.33% 01:16:40.904 cpu : usr=0.83%, sys=1.09%, ctx=5460, majf=0, minf=1 01:16:40.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 01:16:40.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.904 issued rwts: total=0,4222,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.904 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.904 job3: (groupid=0, jobs=1): err= 0: pid=102935: Mon Jul 22 11:13:44 2024 01:16:40.904 write: IOPS=415, BW=104MiB/s (109MB/s)(1052MiB/10138msec); 0 zone resets 01:16:40.904 slat (usec): min=21, max=56013, avg=2369.54, stdev=4114.27 01:16:40.904 clat (msec): min=10, max=286, avg=151.67, stdev=12.24 01:16:40.904 lat (msec): min=10, max=286, avg=154.04, stdev=11.69 01:16:40.904 clat percentiles (msec): 01:16:40.904 | 1.00th=[ 138], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 01:16:40.904 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 01:16:40.904 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 01:16:40.904 | 99.00th=[ 194], 99.50th=[ 232], 99.90th=[ 275], 99.95th=[ 275], 01:16:40.904 | 99.99th=[ 288] 01:16:40.904 bw ( KiB/s): min=93370, max=110592, per=8.06%, avg=106121.30, stdev=3395.27, samples=20 01:16:40.904 iops : min= 364, max= 432, avg=414.50, stdev=13.41, samples=20 
01:16:40.904 lat (msec) : 20=0.05%, 50=0.10%, 100=0.29%, 250=99.26%, 500=0.31% 01:16:40.904 cpu : usr=0.91%, sys=1.33%, ctx=6746, majf=0, minf=1 01:16:40.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 01:16:40.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.904 issued rwts: total=0,4208,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.904 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.904 job4: (groupid=0, jobs=1): err= 0: pid=102936: Mon Jul 22 11:13:44 2024 01:16:40.904 write: IOPS=450, BW=113MiB/s (118MB/s)(1141MiB/10129msec); 0 zone resets 01:16:40.904 slat (usec): min=21, max=12149, avg=2165.89, stdev=3733.31 01:16:40.904 clat (msec): min=4, max=272, avg=139.77, stdev=14.42 01:16:40.904 lat (msec): min=4, max=272, avg=141.93, stdev=14.16 01:16:40.904 clat percentiles (msec): 01:16:40.904 | 1.00th=[ 89], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 01:16:40.904 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 01:16:40.904 | 70.00th=[ 144], 80.00th=[ 144], 90.00th=[ 146], 95.00th=[ 148], 01:16:40.904 | 99.00th=[ 174], 99.50th=[ 218], 99.90th=[ 264], 99.95th=[ 264], 01:16:40.904 | 99.99th=[ 271] 01:16:40.904 bw ( KiB/s): min=107008, max=124928, per=8.75%, avg=115251.20, stdev=3424.14, samples=20 01:16:40.904 iops : min= 418, max= 488, avg=450.20, stdev=13.38, samples=20 01:16:40.904 lat (msec) : 10=0.11%, 20=0.11%, 50=0.33%, 100=0.68%, 250=98.55% 01:16:40.904 lat (msec) : 500=0.22% 01:16:40.904 cpu : usr=0.98%, sys=1.16%, ctx=4556, majf=0, minf=1 01:16:40.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 01:16:40.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.905 issued rwts: total=0,4565,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.905 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.905 job5: (groupid=0, jobs=1): err= 0: pid=102937: Mon Jul 22 11:13:44 2024 01:16:40.905 write: IOPS=452, BW=113MiB/s (119MB/s)(1145MiB/10129msec); 0 zone resets 01:16:40.905 slat (usec): min=27, max=18468, avg=2178.02, stdev=3705.37 01:16:40.905 clat (msec): min=21, max=270, avg=139.31, stdev=13.50 01:16:40.905 lat (msec): min=21, max=270, avg=141.48, stdev=13.18 01:16:40.905 clat percentiles (msec): 01:16:40.905 | 1.00th=[ 96], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 01:16:40.905 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 142], 01:16:40.905 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 148], 01:16:40.905 | 99.00th=[ 171], 99.50th=[ 218], 99.90th=[ 262], 99.95th=[ 262], 01:16:40.905 | 99.99th=[ 271] 01:16:40.905 bw ( KiB/s): min=109056, max=119535, per=8.78%, avg=115647.15, stdev=2885.41, samples=20 01:16:40.905 iops : min= 426, max= 466, avg=451.70, stdev=11.21, samples=20 01:16:40.905 lat (msec) : 50=0.55%, 100=0.52%, 250=98.71%, 500=0.22% 01:16:40.905 cpu : usr=1.20%, sys=1.39%, ctx=6883, majf=0, minf=1 01:16:40.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 01:16:40.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.905 issued rwts: total=0,4580,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.905 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.905 
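A quick consistency check on these per-job figures is to divide the reported average bandwidth by the 256 KiB block size, which should reproduce the average IOPS. For job5 just above (numbers taken straight from this run):

# 115647 KiB/s avg bandwidth at bs=256 KiB
echo $(( 115647 / 256 ))   # prints 451, in line with the reported avg of 451.70 iops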
job6: (groupid=0, jobs=1): err= 0: pid=102938: Mon Jul 22 11:13:44 2024 01:16:40.905 write: IOPS=449, BW=112MiB/s (118MB/s)(1139MiB/10128msec); 0 zone resets 01:16:40.905 slat (usec): min=26, max=57626, avg=2170.54, stdev=3789.26 01:16:40.905 clat (msec): min=60, max=269, avg=140.00, stdev=11.16 01:16:40.905 lat (msec): min=60, max=270, avg=142.17, stdev=10.65 01:16:40.905 clat percentiles (msec): 01:16:40.905 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 01:16:40.905 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 142], 01:16:40.905 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 148], 01:16:40.905 | 99.00th=[ 180], 99.50th=[ 218], 99.90th=[ 262], 99.95th=[ 262], 01:16:40.905 | 99.99th=[ 271] 01:16:40.905 bw ( KiB/s): min=104448, max=118784, per=8.73%, avg=115020.80, stdev=3613.81, samples=20 01:16:40.905 iops : min= 408, max= 464, avg=449.30, stdev=14.12, samples=20 01:16:40.905 lat (msec) : 100=0.55%, 250=99.23%, 500=0.22% 01:16:40.905 cpu : usr=1.17%, sys=1.66%, ctx=5569, majf=0, minf=1 01:16:40.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 01:16:40.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.905 issued rwts: total=0,4556,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.905 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.905 job7: (groupid=0, jobs=1): err= 0: pid=102939: Mon Jul 22 11:13:44 2024 01:16:40.905 write: IOPS=596, BW=149MiB/s (156MB/s)(1506MiB/10093msec); 0 zone resets 01:16:40.905 slat (usec): min=19, max=37988, avg=1654.12, stdev=2834.33 01:16:40.905 clat (msec): min=6, max=195, avg=105.48, stdev= 8.28 01:16:40.905 lat (msec): min=6, max=195, avg=107.14, stdev= 7.90 01:16:40.905 clat percentiles (msec): 01:16:40.905 | 1.00th=[ 97], 5.00th=[ 100], 10.00th=[ 101], 20.00th=[ 102], 01:16:40.905 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 107], 01:16:40.905 | 70.00th=[ 108], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 109], 01:16:40.905 | 99.00th=[ 148], 99.50th=[ 163], 99.90th=[ 182], 99.95th=[ 188], 01:16:40.905 | 99.99th=[ 197] 01:16:40.905 bw ( KiB/s): min=124928, max=157696, per=11.59%, avg=152627.20, stdev=6773.94, samples=20 01:16:40.905 iops : min= 488, max= 616, avg=596.20, stdev=26.46, samples=20 01:16:40.905 lat (msec) : 10=0.03%, 20=0.03%, 100=12.05%, 250=87.88% 01:16:40.905 cpu : usr=1.07%, sys=1.86%, ctx=6835, majf=0, minf=1 01:16:40.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:16:40.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.905 issued rwts: total=0,6025,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.905 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.905 job8: (groupid=0, jobs=1): err= 0: pid=102940: Mon Jul 22 11:13:44 2024 01:16:40.905 write: IOPS=453, BW=113MiB/s (119MB/s)(1148MiB/10128msec); 0 zone resets 01:16:40.905 slat (usec): min=20, max=20226, avg=2146.75, stdev=3726.82 01:16:40.905 clat (msec): min=7, max=264, avg=139.01, stdev=16.11 01:16:40.905 lat (msec): min=7, max=264, avg=141.16, stdev=16.00 01:16:40.905 clat percentiles (msec): 01:16:40.905 | 1.00th=[ 65], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 01:16:40.905 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 01:16:40.905 | 70.00th=[ 144], 80.00th=[ 144], 90.00th=[ 146], 
95.00th=[ 148], 01:16:40.905 | 99.00th=[ 165], 99.50th=[ 211], 99.90th=[ 255], 99.95th=[ 255], 01:16:40.905 | 99.99th=[ 266] 01:16:40.905 bw ( KiB/s): min=109056, max=136192, per=8.80%, avg=115891.20, stdev=5334.52, samples=20 01:16:40.905 iops : min= 426, max= 532, avg=452.70, stdev=20.84, samples=20 01:16:40.905 lat (msec) : 10=0.02%, 20=0.24%, 50=0.59%, 100=1.55%, 250=97.47% 01:16:40.905 lat (msec) : 500=0.13% 01:16:40.905 cpu : usr=0.94%, sys=1.28%, ctx=5126, majf=0, minf=1 01:16:40.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 01:16:40.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.905 issued rwts: total=0,4590,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.905 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.905 job9: (groupid=0, jobs=1): err= 0: pid=102941: Mon Jul 22 11:13:44 2024 01:16:40.905 write: IOPS=601, BW=150MiB/s (158MB/s)(1518MiB/10093msec); 0 zone resets 01:16:40.905 slat (usec): min=18, max=21076, avg=1627.18, stdev=2795.70 01:16:40.905 clat (msec): min=6, max=192, avg=104.69, stdev=10.08 01:16:40.905 lat (msec): min=6, max=192, avg=106.32, stdev= 9.83 01:16:40.905 clat percentiles (msec): 01:16:40.905 | 1.00th=[ 59], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 102], 01:16:40.905 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 107], 01:16:40.905 | 70.00th=[ 108], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 109], 01:16:40.905 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 186], 01:16:40.905 | 99.99th=[ 192] 01:16:40.905 bw ( KiB/s): min=147238, max=158208, per=11.68%, avg=153845.10, stdev=2485.67, samples=20 01:16:40.905 iops : min= 575, max= 618, avg=600.95, stdev= 9.73, samples=20 01:16:40.905 lat (msec) : 10=0.03%, 20=0.13%, 50=0.54%, 100=12.37%, 250=86.92% 01:16:40.905 cpu : usr=1.05%, sys=1.58%, ctx=6913, majf=0, minf=1 01:16:40.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:16:40.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.905 issued rwts: total=0,6072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.905 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.905 job10: (groupid=0, jobs=1): err= 0: pid=102942: Mon Jul 22 11:13:44 2024 01:16:40.905 write: IOPS=417, BW=104MiB/s (109MB/s)(1058MiB/10136msec); 0 zone resets 01:16:40.905 slat (usec): min=21, max=21082, avg=2355.78, stdev=4028.11 01:16:40.905 clat (msec): min=21, max=280, avg=150.83, stdev=14.43 01:16:40.905 lat (msec): min=21, max=280, avg=153.18, stdev=14.07 01:16:40.905 clat percentiles (msec): 01:16:40.905 | 1.00th=[ 88], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 01:16:40.905 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 01:16:40.905 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 01:16:40.905 | 99.00th=[ 178], 99.50th=[ 236], 99.90th=[ 271], 99.95th=[ 271], 01:16:40.905 | 99.99th=[ 279] 01:16:40.905 bw ( KiB/s): min=104448, max=110592, per=8.10%, avg=106741.35, stdev=1446.15, samples=20 01:16:40.905 iops : min= 408, max= 432, avg=416.95, stdev= 5.65, samples=20 01:16:40.905 lat (msec) : 50=0.57%, 100=0.47%, 250=98.63%, 500=0.33% 01:16:40.905 cpu : usr=0.93%, sys=1.42%, ctx=7355, majf=0, minf=1 01:16:40.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 01:16:40.905 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:16:40.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 01:16:40.905 issued rwts: total=0,4233,0,0 short=0,0,0,0 dropped=0,0,0,0 01:16:40.905 latency : target=0, window=0, percentile=100.00%, depth=64 01:16:40.905 01:16:40.905 Run status group 0 (all jobs): 01:16:40.905 WRITE: bw=1286MiB/s (1349MB/s), 104MiB/s-150MiB/s (109MB/s-158MB/s), io=12.7GiB (13.7GB), run=10093-10139msec 01:16:40.905 01:16:40.905 Disk stats (read/write): 01:16:40.905 nvme0n1: ios=49/8902, merge=0/0, ticks=50/1209184, in_queue=1209234, util=97.60% 01:16:40.905 nvme10n1: ios=49/9015, merge=0/0, ticks=81/1210904, in_queue=1210985, util=98.03% 01:16:40.905 nvme1n1: ios=27/8289, merge=0/0, ticks=32/1209373, in_queue=1209405, util=97.84% 01:16:40.905 nvme2n1: ios=0/8265, merge=0/0, ticks=0/1209520, in_queue=1209520, util=97.87% 01:16:40.905 nvme3n1: ios=0/8986, merge=0/0, ticks=0/1212116, in_queue=1212116, util=98.02% 01:16:40.905 nvme4n1: ios=0/9006, merge=0/0, ticks=0/1209787, in_queue=1209787, util=98.19% 01:16:40.905 nvme5n1: ios=0/8957, merge=0/0, ticks=0/1210415, in_queue=1210415, util=98.26% 01:16:40.905 nvme6n1: ios=0/11881, merge=0/0, ticks=0/1211779, in_queue=1211779, util=98.40% 01:16:40.905 nvme7n1: ios=0/9022, merge=0/0, ticks=0/1210792, in_queue=1210792, util=98.63% 01:16:40.905 nvme8n1: ios=0/11972, merge=0/0, ticks=0/1212387, in_queue=1212387, util=98.75% 01:16:40.905 nvme9n1: ios=0/8309, merge=0/0, ticks=0/1209253, in_queue=1209253, util=98.82% 01:16:40.905 11:13:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 01:16:40.905 11:13:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 01:16:40.905 11:13:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:40.905 11:13:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:16:40.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 01:16:40.905 
NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 01:16:40.905 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 01:16:40.905 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK4 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.905 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 01:16:40.906 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 01:16:40.906 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 01:16:40.906 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 01:16:40.906 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:40.906 11:13:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 01:16:40.906 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:40.906 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 01:16:40.906 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:40.906 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:40.906 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:40.906 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 01:16:40.906 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 01:16:41.165 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 01:16:41.165 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:41.165 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 01:16:41.423 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:16:41.423 
11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:16:41.423 rmmod nvme_tcp 01:16:41.423 rmmod nvme_fabrics 01:16:41.423 rmmod nvme_keyring 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 102231 ']' 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 102231 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 102231 ']' 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 102231 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102231 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:16:41.423 killing process with pid 102231 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102231' 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 102231 01:16:41.423 11:13:46 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@972 -- # wait 102231 01:16:42.356 11:13:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:16:42.356 11:13:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:16:42.356 11:13:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:16:42.356 11:13:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:16:42.356 11:13:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 01:16:42.356 11:13:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:42.357 11:13:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:42.357 11:13:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:42.357 11:13:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:16:42.357 01:16:42.357 real 0m50.262s 01:16:42.357 user 2m51.034s 01:16:42.357 sys 0m23.998s 01:16:42.357 11:13:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 01:16:42.357 ************************************ 01:16:42.357 END TEST nvmf_multiconnection 01:16:42.357 ************************************ 01:16:42.357 11:13:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 01:16:42.357 11:13:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:16:42.357 11:13:47 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 01:16:42.357 11:13:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:16:42.357 11:13:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:16:42.357 11:13:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:16:42.357 ************************************ 01:16:42.357 START TEST nvmf_initiator_timeout 01:16:42.357 ************************************ 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 01:16:42.357 * Looking for test storage... 
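The trace above runs the same three-step teardown for cnode1 through cnode11: disconnect the initiator, wait for the namespace with serial SPDK<i> to disappear from lsblk, then delete the subsystem on the target. A condensed shell sketch of that loop, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and a simple poll standing in for the waitforserial_disconnect helper (both are assumptions, not verbatim script contents):

    for i in $(seq 1 11); do
        # detach the initiator from subsystem i
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # wait until the block device with serial SPDK${i} is gone
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do sleep 1; done
        # remove the subsystem from the running target over the RPC socket
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done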
01:16:42.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 01:16:42.357 11:13:47 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:16:42.357 Cannot find device "nvmf_tgt_br" 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:16:42.357 Cannot find device "nvmf_tgt_br2" 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:16:42.357 Cannot find device "nvmf_tgt_br" 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:16:42.357 Cannot find device "nvmf_tgt_br2" 01:16:42.357 11:13:47 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:16:42.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:16:42.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:16:42.357 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:16:42.358 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:16:42.616 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
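Condensing the nvmf_veth_init trace above: the test builds one network namespace for the target, three veth pairs, and a bridge that joins the host-side ends; the iptables ACCEPT rule and the ping checks that verify the topology follow in the trace below. A sketch using the same interface names and addresses as the log (the in-namespace link-up steps and per-command error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip link set nvmf_init_br master nvmf_br                      # bridge the host-side veth ends
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br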
01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:16:42.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:16:42.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 01:16:42.617 01:16:42.617 --- 10.0.0.2 ping statistics --- 01:16:42.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:42.617 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:16:42.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:16:42.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 01:16:42.617 01:16:42.617 --- 10.0.0.3 ping statistics --- 01:16:42.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:42.617 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:16:42.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:16:42.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 01:16:42.617 01:16:42.617 --- 10.0.0.1 ping statistics --- 01:16:42.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:42.617 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=103312 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 103312 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 103312 ']' 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:42.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:16:42.617 11:13:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:42.617 [2024-07-22 11:13:47.798421] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:16:42.617 [2024-07-22 11:13:47.798501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:16:42.875 [2024-07-22 11:13:47.943862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:16:42.875 [2024-07-22 11:13:48.013681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:16:42.875 [2024-07-22 11:13:48.013753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:16:42.875 [2024-07-22 11:13:48.013769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:16:42.875 [2024-07-22 11:13:48.013780] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:16:42.875 [2024-07-22 11:13:48.013789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
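The target launch traced above reduces to starting nvmf_tgt inside the namespace and waiting for its RPC socket before issuing any configuration RPCs. A minimal sketch; the socket-polling loop is an assumption standing in for the waitforlisten helper, which the log only shows by name:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the app has created its RPC socket
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done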
01:16:42.875 [2024-07-22 11:13:48.013987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:16:42.875 [2024-07-22 11:13:48.014610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:16:42.875 [2024-07-22 11:13:48.014771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:16:42.875 [2024-07-22 11:13:48.014914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:43.805 Malloc0 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:43.805 Delay0 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:43.805 [2024-07-22 11:13:48.796851] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:43.805 [2024-07-22 11:13:48.825064] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 01:16:43.805 11:13:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 01:16:43.805 11:13:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 01:16:43.805 11:13:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 01:16:43.805 11:13:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 01:16:43.805 11:13:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=103391 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 01:16:46.388 11:13:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 01:16:46.388 [global] 01:16:46.388 thread=1 01:16:46.388 invalidate=1 01:16:46.388 rw=write 01:16:46.388 time_based=1 01:16:46.388 runtime=60 01:16:46.388 ioengine=libaio 01:16:46.388 direct=1 01:16:46.388 bs=4096 01:16:46.388 iodepth=1 01:16:46.388 norandommap=0 01:16:46.388 numjobs=1 01:16:46.388 01:16:46.388 verify_dump=1 01:16:46.388 verify_backlog=512 01:16:46.388 verify_state_save=0 01:16:46.388 do_verify=1 01:16:46.388 verify=crc32c-intel 01:16:46.388 [job0] 01:16:46.388 filename=/dev/nvme0n1 01:16:46.388 Could not set queue depth (nvme0n1) 01:16:46.388 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:16:46.388 fio-3.35 01:16:46.388 Starting 1 thread 
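Collected from the trace above, the target-side provisioning is a short RPC sequence followed by a host-side connect; the values are the ones traced, with scripts/rpc.py standing in for the rpc_cmd wrapper. The -r/-t/-w/-n arguments are the delay bdev's average and p99 read/write latencies in microseconds, so Delay0 starts out adding only a 30 us delay per I/O:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 \
        --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479

Once the namespace shows up on the host, the fio wrapper above drives the 60-second, queue-depth-1, 4 KiB verified write job against /dev/nvme0n1.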
01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:48.916 true 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:48.916 true 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:48.916 true 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:48.916 true 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:48.916 11:13:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:52.194 true 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:52.194 true 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:52.194 true 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_write 30 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:16:52.194 true 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 01:16:52.194 11:13:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 103391 01:17:48.403 01:17:48.403 job0: (groupid=0, jobs=1): err= 0: pid=103418: Mon Jul 22 11:14:51 2024 01:17:48.403 read: IOPS=836, BW=3345KiB/s (3425kB/s)(196MiB/60000msec) 01:17:48.403 slat (usec): min=13, max=12729, avg=17.19, stdev=79.84 01:17:48.403 clat (usec): min=153, max=40691k, avg=1003.65, stdev=181657.27 01:17:48.403 lat (usec): min=169, max=40691k, avg=1020.84, stdev=181657.28 01:17:48.403 clat percentiles (usec): 01:17:48.403 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 01:17:48.403 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 01:17:48.403 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 221], 01:17:48.403 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 289], 99.95th=[ 371], 01:17:48.403 | 99.99th=[ 570] 01:17:48.403 write: IOPS=839, BW=3358KiB/s (3439kB/s)(197MiB/60000msec); 0 zone resets 01:17:48.403 slat (usec): min=16, max=692, avg=24.15, stdev= 7.33 01:17:48.403 clat (usec): min=106, max=1611, avg=146.40, stdev=15.84 01:17:48.403 lat (usec): min=140, max=1634, avg=170.54, stdev=17.42 01:17:48.403 clat percentiles (usec): 01:17:48.403 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 01:17:48.403 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 01:17:48.403 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 172], 01:17:48.403 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 233], 99.95th=[ 306], 01:17:48.403 | 99.99th=[ 453] 01:17:48.403 bw ( KiB/s): min= 1856, max=12288, per=100.00%, avg=10082.46, stdev=2158.45, samples=39 01:17:48.403 iops : min= 464, max= 3072, avg=2520.62, stdev=539.61, samples=39 01:17:48.403 lat (usec) : 250=99.81%, 500=0.18%, 750=0.01% 01:17:48.403 lat (msec) : 2=0.01%, >=2000=0.01% 01:17:48.403 cpu : usr=0.59%, sys=2.46%, ctx=100564, majf=0, minf=2 01:17:48.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:17:48.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:17:48.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:17:48.403 issued rwts: total=50176,50374,0,0 short=0,0,0,0 dropped=0,0,0,0 01:17:48.403 latency : target=0, window=0, percentile=100.00%, depth=1 01:17:48.403 01:17:48.403 Run status group 0 (all jobs): 01:17:48.403 READ: bw=3345KiB/s (3425kB/s), 3345KiB/s-3345KiB/s (3425kB/s-3425kB/s), io=196MiB (206MB), run=60000-60000msec 01:17:48.403 WRITE: bw=3358KiB/s (3439kB/s), 3358KiB/s-3358KiB/s (3439kB/s-3439kB/s), io=197MiB (206MB), run=60000-60000msec 01:17:48.403 01:17:48.403 Disk stats (read/write): 01:17:48.403 nvme0n1: ios=50125/50176, merge=0/0, ticks=10449/8399, in_queue=18848, util=99.84% 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:17:48.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 01:17:48.403 nvmf hotplug test: fio successful as expected 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:17:48.403 rmmod nvme_tcp 01:17:48.403 rmmod nvme_fabrics 01:17:48.403 rmmod nvme_keyring 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 103312 ']' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 103312 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 103312 ']' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 103312 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
103312 01:17:48.403 killing process with pid 103312 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103312' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 103312 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 103312 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:17:48.403 ************************************ 01:17:48.403 END TEST nvmf_initiator_timeout 01:17:48.403 ************************************ 01:17:48.403 01:17:48.403 real 1m4.456s 01:17:48.403 user 4m6.982s 01:17:48.403 sys 0m7.788s 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 01:17:48.403 11:14:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 01:17:48.403 11:14:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:17:48.403 11:14:51 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 01:17:48.403 11:14:51 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 01:17:48.403 11:14:51 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:17:48.403 11:14:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:48.403 11:14:51 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 01:17:48.403 11:14:51 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:17:48.403 11:14:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:48.403 11:14:51 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 01:17:48.403 11:14:51 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 01:17:48.403 11:14:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:17:48.403 11:14:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:48.403 11:14:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:48.403 ************************************ 01:17:48.403 START TEST nvmf_multicontroller 01:17:48.403 ************************************ 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 01:17:48.403 * Looking for test storage... 
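For context on the initiator_timeout run that just completed: rpc_cmd is the harness wrapper around scripts/rpc.py, and the test works by first raising the Delay0 latency knobs into the tens-of-seconds range (31000000+ us, well past the initiator's command timeout), sleeping, and then dropping them back to 30 us so the queued I/O can complete within fio's 60 s run. A minimal hand-run sketch of that restore-and-teardown step, assuming an SPDK checkout with scripts/rpc.py talking to the default RPC socket (all values copied from the log above):

    # drop the Delay0 latencies again (the delay bdev takes these in microseconds)
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30
    # initiator-side teardown, as in the log: disconnect and confirm the serial is gone
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || echo 'controller disconnected'

This is only a sketch of what the harness automates; the pass/fail criterion in the log is simply that fio exits with status 0 once the latencies are restored.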
01:17:48.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:48.403 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 01:17:48.404 11:14:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:17:48.404 Cannot find device "nvmf_tgt_br" 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:17:48.404 Cannot find device "nvmf_tgt_br2" 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:17:48.404 11:14:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 
-- # ip link set nvmf_tgt_br down 01:17:48.404 Cannot find device "nvmf_tgt_br" 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:17:48.404 Cannot find device "nvmf_tgt_br2" 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:48.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:48.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 
-- # ip link set nvmf_init_br master nvmf_br 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:17:48.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:17:48.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 01:17:48.404 01:17:48.404 --- 10.0.0.2 ping statistics --- 01:17:48.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:48.404 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:17:48.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:48.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 01:17:48.404 01:17:48.404 --- 10.0.0.3 ping statistics --- 01:17:48.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:48.404 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:17:48.404 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:48.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:48.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 01:17:48.405 01:17:48.405 --- 10.0.0.1 ping statistics --- 01:17:48.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:48.405 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=104243 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 104243 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0xE 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 104243 ']' 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:48.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 [2024-07-22 11:14:52.343457] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:48.405 [2024-07-22 11:14:52.343519] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:48.405 [2024-07-22 11:14:52.480497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:17:48.405 [2024-07-22 11:14:52.550844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:48.405 [2024-07-22 11:14:52.550916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:48.405 [2024-07-22 11:14:52.550926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:48.405 [2024-07-22 11:14:52.550934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:48.405 [2024-07-22 11:14:52.550940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
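The nvmf_tgt instance whose startup banner begins above is launched inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init just finished wiring up. A condensed, hand-runnable sketch of that topology, reusing the interface names and 10.0.0.x addresses from the log (assumes root on a disposable test host; the second target veth, nvmf_tgt_if2 with 10.0.0.3, is built the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability, as checked in the log

With that in place the target is started under the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE), and the log continues with its reactor start-up below.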
01:17:48.405 [2024-07-22 11:14:52.551112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:17:48.405 [2024-07-22 11:14:52.551615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:17:48.405 [2024-07-22 11:14:52.551635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 [2024-07-22 11:14:52.744391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 Malloc0 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 [2024-07-22 11:14:52.817518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 
11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 [2024-07-22 11:14:52.825396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 Malloc1 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=104283 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 104283 /var/tmp/bdevperf.sock 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 104283 ']' 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:17:48.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:48.405 11:14:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.971 NVMe0n1 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.971 11:14:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.971 1 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.971 2024/07/22 11:14:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:17:48.971 request: 01:17:48.971 { 01:17:48.971 "method": "bdev_nvme_attach_controller", 01:17:48.971 "params": { 01:17:48.971 "name": "NVMe0", 01:17:48.971 "trtype": "tcp", 01:17:48.971 "traddr": "10.0.0.2", 01:17:48.971 "adrfam": "ipv4", 01:17:48.971 "trsvcid": "4420", 01:17:48.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:17:48.971 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 01:17:48.971 "hostaddr": "10.0.0.2", 01:17:48.971 "hostsvcid": "60000", 01:17:48.971 "prchk_reftag": false, 01:17:48.971 "prchk_guard": false, 01:17:48.971 "hdgst": false, 01:17:48.971 "ddgst": false 01:17:48.971 } 01:17:48.971 } 01:17:48.971 Got JSON-RPC error response 01:17:48.971 GoRPCClient: error on JSON-RPC call 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.971 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 01:17:48.972 2024/07/22 11:14:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:17:48.972 request: 01:17:48.972 { 01:17:48.972 "method": "bdev_nvme_attach_controller", 01:17:48.972 "params": { 01:17:48.972 "name": "NVMe0", 01:17:48.972 "trtype": "tcp", 01:17:48.972 "traddr": "10.0.0.2", 01:17:48.972 "adrfam": "ipv4", 01:17:48.972 "trsvcid": "4420", 01:17:48.972 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:17:48.972 "hostaddr": "10.0.0.2", 01:17:48.972 "hostsvcid": "60000", 01:17:48.972 "prchk_reftag": false, 01:17:48.972 "prchk_guard": false, 01:17:48.972 "hdgst": false, 01:17:48.972 "ddgst": false 01:17:48.972 } 01:17:48.972 } 01:17:48.972 Got JSON-RPC error response 01:17:48.972 GoRPCClient: error on JSON-RPC call 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.972 2024/07/22 11:14:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], 
err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 01:17:48.972 request: 01:17:48.972 { 01:17:48.972 "method": "bdev_nvme_attach_controller", 01:17:48.972 "params": { 01:17:48.972 "name": "NVMe0", 01:17:48.972 "trtype": "tcp", 01:17:48.972 "traddr": "10.0.0.2", 01:17:48.972 "adrfam": "ipv4", 01:17:48.972 "trsvcid": "4420", 01:17:48.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:17:48.972 "hostaddr": "10.0.0.2", 01:17:48.972 "hostsvcid": "60000", 01:17:48.972 "prchk_reftag": false, 01:17:48.972 "prchk_guard": false, 01:17:48.972 "hdgst": false, 01:17:48.972 "ddgst": false, 01:17:48.972 "multipath": "disable" 01:17:48.972 } 01:17:48.972 } 01:17:48.972 Got JSON-RPC error response 01:17:48.972 GoRPCClient: error on JSON-RPC call 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.972 2024/07/22 11:14:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:17:48.972 request: 01:17:48.972 { 01:17:48.972 "method": "bdev_nvme_attach_controller", 01:17:48.972 "params": { 01:17:48.972 "name": "NVMe0", 01:17:48.972 
"trtype": "tcp", 01:17:48.972 "traddr": "10.0.0.2", 01:17:48.972 "adrfam": "ipv4", 01:17:48.972 "trsvcid": "4420", 01:17:48.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:17:48.972 "hostaddr": "10.0.0.2", 01:17:48.972 "hostsvcid": "60000", 01:17:48.972 "prchk_reftag": false, 01:17:48.972 "prchk_guard": false, 01:17:48.972 "hdgst": false, 01:17:48.972 "ddgst": false, 01:17:48.972 "multipath": "failover" 01:17:48.972 } 01:17:48.972 } 01:17:48.972 Got JSON-RPC error response 01:17:48.972 GoRPCClient: error on JSON-RPC call 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.972 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:48.972 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:49.229 01:17:49.229 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:49.229 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:17:49.229 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 01:17:49.229 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:49.229 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:49.229 11:14:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:49.229 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 01:17:49.229 11:14:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:17:50.603 0 01:17:50.603 11:14:55 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 104283 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 104283 ']' 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 104283 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104283 01:17:50.603 killing process with pid 104283 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104283' 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 104283 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 104283 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 01:17:50.603 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 01:17:50.603 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 01:17:50.603 [2024-07-22 11:14:52.951852] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:17:50.603 [2024-07-22 11:14:52.951953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104283 ] 01:17:50.603 [2024-07-22 11:14:53.086410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:50.603 [2024-07-22 11:14:53.155704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:17:50.603 [2024-07-22 11:14:54.231421] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 53d06db9-0c39-4d6f-82cf-4534b23fb10e already exists 01:17:50.603 [2024-07-22 11:14:54.231469] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:53d06db9-0c39-4d6f-82cf-4534b23fb10e alias for bdev NVMe1n1 01:17:50.603 [2024-07-22 11:14:54.231501] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 01:17:50.603 Running I/O for 1 seconds... 01:17:50.603 01:17:50.603 Latency(us) 01:17:50.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:50.603 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 01:17:50.603 NVMe0n1 : 1.01 23086.80 90.18 0.00 0.00 5532.91 2457.60 14715.81 01:17:50.603 =================================================================================================================== 01:17:50.603 Total : 23086.80 90.18 0.00 0.00 5532.91 2457.60 14715.81 01:17:50.603 Received shutdown signal, test time was about 1.000000 seconds 01:17:50.603 01:17:50.603 Latency(us) 01:17:50.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:50.604 =================================================================================================================== 01:17:50.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:50.604 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 01:17:50.604 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:17:50.604 rmmod nvme_tcp 01:17:50.604 rmmod nvme_fabrics 01:17:50.604 rmmod nvme_keyring 01:17:50.861 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 104243 ']' 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 104243 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 104243 ']' 01:17:50.862 11:14:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 104243 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104243 01:17:50.862 killing process with pid 104243 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104243' 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 104243 01:17:50.862 11:14:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 104243 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:17:51.121 01:17:51.121 real 0m4.365s 01:17:51.121 user 0m13.650s 01:17:51.121 sys 0m1.065s 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 01:17:51.121 11:14:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:17:51.121 ************************************ 01:17:51.121 END TEST nvmf_multicontroller 01:17:51.121 ************************************ 01:17:51.121 11:14:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:17:51.121 11:14:56 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 01:17:51.121 11:14:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:17:51.121 11:14:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:51.121 11:14:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:51.121 ************************************ 01:17:51.121 START TEST nvmf_aer 01:17:51.121 ************************************ 01:17:51.121 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 01:17:51.381 * Looking for test storage... 
01:17:51.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:17:51.381 
11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:17:51.381 Cannot find device "nvmf_tgt_br" 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:17:51.381 Cannot find device "nvmf_tgt_br2" 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:17:51.381 Cannot find device "nvmf_tgt_br" 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:17:51.381 Cannot find device "nvmf_tgt_br2" 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:51.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:51.381 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:51.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:51.382 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:17:51.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:17:51.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 01:17:51.640 01:17:51.640 --- 10.0.0.2 ping statistics --- 01:17:51.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:51.640 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:17:51.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:51.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 01:17:51.640 01:17:51.640 --- 10.0.0.3 ping statistics --- 01:17:51.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:51.640 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:51.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:17:51.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 01:17:51.640 01:17:51.640 --- 10.0.0.1 ping statistics --- 01:17:51.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:51.640 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:51.640 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=104532 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 104532 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 104532 ']' 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:51.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:51.641 11:14:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:51.641 [2024-07-22 11:14:56.771564] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:51.641 [2024-07-22 11:14:56.771658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:51.899 [2024-07-22 11:14:56.915510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:17:51.899 [2024-07-22 11:14:56.986712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:51.899 [2024-07-22 11:14:56.986785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
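The nvmf_veth_init portion of the trace above is dense, so here is a condensed, hand-written recap of the topology those ip/iptables commands build. Interface and namespace names are the test suite's, taken from the trace; the initial teardown of leftover interfaces and the "ip link set ... up" steps are omitted. Treat it as a reading aid, not a verbatim copy of nvmf/common.sh:

  # 10.0.0.1 stays on the host as the initiator address; 10.0.0.2 and 10.0.0.3
  # move into the nvmf_tgt_ns_spdk namespace as NVMe/TCP target addresses.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # the three *_br veth peers are enslaved to one bridge so the host and the
  # namespace can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open the NVMe/TCP port and let traffic hairpin on the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks in the trace (host to 10.0.0.2, host to 10.0.0.3, namespace back to 10.0.0.1) simply verify this topology before nvmf_tgt is started inside the namespace.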
01:17:51.899 [2024-07-22 11:14:56.986800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:51.899 [2024-07-22 11:14:56.986811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:51.899 [2024-07-22 11:14:56.986821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:51.899 [2024-07-22 11:14:56.987010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:51.899 [2024-07-22 11:14:56.987077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:17:51.899 [2024-07-22 11:14:56.987203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:17:51.899 [2024-07-22 11:14:56.987214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:52.832 [2024-07-22 11:14:57.742624] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:52.832 Malloc0 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:52.832 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:52.833 [2024-07-22 11:14:57.820173] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:52.833 [ 01:17:52.833 { 01:17:52.833 "allow_any_host": true, 01:17:52.833 "hosts": [], 01:17:52.833 "listen_addresses": [], 01:17:52.833 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:17:52.833 "subtype": "Discovery" 01:17:52.833 }, 01:17:52.833 { 01:17:52.833 "allow_any_host": true, 01:17:52.833 "hosts": [], 01:17:52.833 "listen_addresses": [ 01:17:52.833 { 01:17:52.833 "adrfam": "IPv4", 01:17:52.833 "traddr": "10.0.0.2", 01:17:52.833 "trsvcid": "4420", 01:17:52.833 "trtype": "TCP" 01:17:52.833 } 01:17:52.833 ], 01:17:52.833 "max_cntlid": 65519, 01:17:52.833 "max_namespaces": 2, 01:17:52.833 "min_cntlid": 1, 01:17:52.833 "model_number": "SPDK bdev Controller", 01:17:52.833 "namespaces": [ 01:17:52.833 { 01:17:52.833 "bdev_name": "Malloc0", 01:17:52.833 "name": "Malloc0", 01:17:52.833 "nguid": "BD4A685BE89E4F1F9EFC94A8DA245797", 01:17:52.833 "nsid": 1, 01:17:52.833 "uuid": "bd4a685b-e89e-4f1f-9efc-94a8da245797" 01:17:52.833 } 01:17:52.833 ], 01:17:52.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:17:52.833 "serial_number": "SPDK00000000000001", 01:17:52.833 "subtype": "NVMe" 01:17:52.833 } 01:17:52.833 ] 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=104586 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 01:17:52.833 11:14:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:53.091 Malloc1 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:53.091 [ 01:17:53.091 { 01:17:53.091 "allow_any_host": true, 01:17:53.091 "hosts": [], 01:17:53.091 "listen_addresses": [], 01:17:53.091 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:17:53.091 "subtype": "Discovery" 01:17:53.091 }, 01:17:53.091 { 01:17:53.091 "allow_any_host": true, 01:17:53.091 "hosts": [], 01:17:53.091 "listen_addresses": [ 01:17:53.091 { 01:17:53.091 "adrfam": "IPv4", 01:17:53.091 "traddr": "10.0.0.2", 01:17:53.091 "trsvcid": "4420", 01:17:53.091 "trtype": "TCP" 01:17:53.091 } 01:17:53.091 ], 01:17:53.091 "max_cntlid": 65519, 01:17:53.091 "max_namespaces": 2, 01:17:53.091 Asynchronous Event Request test 01:17:53.091 Attaching to 10.0.0.2 01:17:53.091 Attached to 10.0.0.2 01:17:53.091 Registering asynchronous event callbacks... 01:17:53.091 Starting namespace attribute notice tests for all controllers... 01:17:53.091 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 01:17:53.091 aer_cb - Changed Namespace 01:17:53.091 Cleaning up... 
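To summarize what the aer.sh trace above exercises: the target is populated over RPC, the standalone aer helper connects as an NVMe-oF host and registers for asynchronous events, and adding a second namespace is what produces the "Changed Namespace" callback seen in the log. A minimal sketch of that RPC sequence follows; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py (assumed here), and every command name and argument is copied from the trace:

  # TCP transport plus a subsystem sized for two namespaces, one attached up front
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # with test/nvme/aer/aer connected and waiting on /tmp/aer_touch_file,
  # attaching a second namespace triggers the namespace-attribute AER
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The second nvmf_get_subsystems dump in the trace then lists both Malloc0 (nsid 1) and Malloc1 (nsid 2) under cnode1.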
01:17:53.091 "min_cntlid": 1, 01:17:53.091 "model_number": "SPDK bdev Controller", 01:17:53.091 "namespaces": [ 01:17:53.091 { 01:17:53.091 "bdev_name": "Malloc0", 01:17:53.091 "name": "Malloc0", 01:17:53.091 "nguid": "BD4A685BE89E4F1F9EFC94A8DA245797", 01:17:53.091 "nsid": 1, 01:17:53.091 "uuid": "bd4a685b-e89e-4f1f-9efc-94a8da245797" 01:17:53.091 }, 01:17:53.091 { 01:17:53.091 "bdev_name": "Malloc1", 01:17:53.091 "name": "Malloc1", 01:17:53.091 "nguid": "56D5047666AC4BC88458AADBDB2F08AB", 01:17:53.091 "nsid": 2, 01:17:53.091 "uuid": "56d50476-66ac-4bc8-8458-aadbdb2f08ab" 01:17:53.091 } 01:17:53.091 ], 01:17:53.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:17:53.091 "serial_number": "SPDK00000000000001", 01:17:53.091 "subtype": "NVMe" 01:17:53.091 } 01:17:53.091 ] 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 104586 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:53.091 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:53.092 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 01:17:53.092 11:14:58 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 01:17:53.092 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 01:17:53.092 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 01:17:53.092 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:17:53.092 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 01:17:53.092 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 01:17:53.092 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:17:53.092 rmmod nvme_tcp 01:17:53.092 rmmod nvme_fabrics 01:17:53.092 rmmod nvme_keyring 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 104532 ']' 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 104532 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 104532 ']' 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 104532 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104532 01:17:53.350 killing process with pid 104532 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104532' 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 104532 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 104532 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:53.350 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:53.608 11:14:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:17:53.608 ************************************ 01:17:53.608 END TEST nvmf_aer 01:17:53.608 ************************************ 01:17:53.608 01:17:53.608 real 0m2.330s 01:17:53.608 user 0m6.356s 01:17:53.608 sys 0m0.701s 01:17:53.608 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 01:17:53.608 11:14:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:17:53.608 11:14:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:17:53.608 11:14:58 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 01:17:53.608 11:14:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:17:53.608 11:14:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:53.608 11:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:53.608 ************************************ 01:17:53.608 START TEST nvmf_async_init 01:17:53.608 ************************************ 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 01:17:53.608 * Looking for test storage... 
01:17:53.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:53.608 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d1d5a3e4e702437f82f700ec10c4a737 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:17:53.609 11:14:58 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:17:53.609 Cannot find device "nvmf_tgt_br" 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 01:17:53.609 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:17:53.867 Cannot find device "nvmf_tgt_br2" 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:17:53.867 Cannot find device "nvmf_tgt_br" 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 01:17:53.867 Cannot find device "nvmf_tgt_br2" 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:53.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:53.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:17:53.867 11:14:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:17:53.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:17:53.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 01:17:53.867 01:17:53.867 --- 10.0.0.2 ping statistics --- 01:17:53.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:53.867 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 01:17:53.867 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:17:54.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:54.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 01:17:54.125 01:17:54.125 --- 10.0.0.3 ping statistics --- 01:17:54.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:54.125 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:54.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:54.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 01:17:54.125 01:17:54.125 --- 10.0.0.1 ping statistics --- 01:17:54.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:54.125 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=104757 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 104757 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 104757 ']' 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:54.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:54.125 11:14:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:54.125 [2024-07-22 11:14:59.159587] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:54.125 [2024-07-22 11:14:59.159682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:54.125 [2024-07-22 11:14:59.303020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:54.383 [2024-07-22 11:14:59.379414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:54.383 [2024-07-22 11:14:59.379486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:54.383 [2024-07-22 11:14:59.379499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:54.383 [2024-07-22 11:14:59.379508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:54.383 [2024-07-22 11:14:59.379515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:54.383 [2024-07-22 11:14:59.379541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 [2024-07-22 11:15:00.216704] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 null0 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 
11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d1d5a3e4e702437f82f700ec10c4a737 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 [2024-07-22 11:15:00.256842] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 nvme0n1 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.317 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.317 [ 01:17:55.317 { 01:17:55.317 "aliases": [ 01:17:55.317 "d1d5a3e4-e702-437f-82f7-00ec10c4a737" 01:17:55.317 ], 01:17:55.317 "assigned_rate_limits": { 01:17:55.318 "r_mbytes_per_sec": 0, 01:17:55.318 "rw_ios_per_sec": 0, 01:17:55.318 "rw_mbytes_per_sec": 0, 01:17:55.318 "w_mbytes_per_sec": 0 01:17:55.318 }, 01:17:55.318 "block_size": 512, 01:17:55.318 "claimed": false, 01:17:55.318 "driver_specific": { 01:17:55.318 "mp_policy": "active_passive", 01:17:55.318 "nvme": [ 01:17:55.318 { 01:17:55.318 "ctrlr_data": { 01:17:55.318 "ana_reporting": false, 01:17:55.318 "cntlid": 1, 01:17:55.318 "firmware_revision": "24.09", 01:17:55.318 "model_number": "SPDK bdev Controller", 01:17:55.318 "multi_ctrlr": true, 01:17:55.318 "oacs": { 01:17:55.318 "firmware": 0, 01:17:55.318 "format": 0, 01:17:55.318 "ns_manage": 0, 01:17:55.318 "security": 0 01:17:55.318 }, 01:17:55.318 "serial_number": "00000000000000000000", 01:17:55.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:55.318 "vendor_id": "0x8086" 01:17:55.318 }, 01:17:55.318 "ns_data": { 01:17:55.318 "can_share": true, 01:17:55.318 "id": 1 01:17:55.318 }, 01:17:55.318 "trid": { 01:17:55.318 "adrfam": "IPv4", 
01:17:55.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:55.318 "traddr": "10.0.0.2", 01:17:55.318 "trsvcid": "4420", 01:17:55.318 "trtype": "TCP" 01:17:55.318 }, 01:17:55.318 "vs": { 01:17:55.318 "nvme_version": "1.3" 01:17:55.318 } 01:17:55.318 } 01:17:55.318 ] 01:17:55.318 }, 01:17:55.318 "memory_domains": [ 01:17:55.318 { 01:17:55.318 "dma_device_id": "system", 01:17:55.318 "dma_device_type": 1 01:17:55.318 } 01:17:55.318 ], 01:17:55.318 "name": "nvme0n1", 01:17:55.318 "num_blocks": 2097152, 01:17:55.318 "product_name": "NVMe disk", 01:17:55.318 "supported_io_types": { 01:17:55.318 "abort": true, 01:17:55.318 "compare": true, 01:17:55.318 "compare_and_write": true, 01:17:55.318 "copy": true, 01:17:55.318 "flush": true, 01:17:55.318 "get_zone_info": false, 01:17:55.318 "nvme_admin": true, 01:17:55.318 "nvme_io": true, 01:17:55.318 "nvme_io_md": false, 01:17:55.318 "nvme_iov_md": false, 01:17:55.318 "read": true, 01:17:55.318 "reset": true, 01:17:55.318 "seek_data": false, 01:17:55.318 "seek_hole": false, 01:17:55.318 "unmap": false, 01:17:55.318 "write": true, 01:17:55.318 "write_zeroes": true, 01:17:55.318 "zcopy": false, 01:17:55.318 "zone_append": false, 01:17:55.318 "zone_management": false 01:17:55.318 }, 01:17:55.318 "uuid": "d1d5a3e4-e702-437f-82f7-00ec10c4a737", 01:17:55.318 "zoned": false 01:17:55.318 } 01:17:55.318 ] 01:17:55.318 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.318 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 01:17:55.318 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.318 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.576 [2024-07-22 11:15:00.524925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:17:55.576 [2024-07-22 11:15:00.525204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e5650 (9): Bad file descriptor 01:17:55.576 [2024-07-22 11:15:00.657111] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
01:17:55.576 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.576 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:17:55.576 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.576 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.576 [ 01:17:55.576 { 01:17:55.576 "aliases": [ 01:17:55.576 "d1d5a3e4-e702-437f-82f7-00ec10c4a737" 01:17:55.576 ], 01:17:55.576 "assigned_rate_limits": { 01:17:55.576 "r_mbytes_per_sec": 0, 01:17:55.576 "rw_ios_per_sec": 0, 01:17:55.576 "rw_mbytes_per_sec": 0, 01:17:55.576 "w_mbytes_per_sec": 0 01:17:55.576 }, 01:17:55.576 "block_size": 512, 01:17:55.576 "claimed": false, 01:17:55.576 "driver_specific": { 01:17:55.576 "mp_policy": "active_passive", 01:17:55.576 "nvme": [ 01:17:55.576 { 01:17:55.576 "ctrlr_data": { 01:17:55.576 "ana_reporting": false, 01:17:55.576 "cntlid": 2, 01:17:55.576 "firmware_revision": "24.09", 01:17:55.576 "model_number": "SPDK bdev Controller", 01:17:55.576 "multi_ctrlr": true, 01:17:55.576 "oacs": { 01:17:55.576 "firmware": 0, 01:17:55.576 "format": 0, 01:17:55.576 "ns_manage": 0, 01:17:55.576 "security": 0 01:17:55.576 }, 01:17:55.576 "serial_number": "00000000000000000000", 01:17:55.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:55.576 "vendor_id": "0x8086" 01:17:55.576 }, 01:17:55.576 "ns_data": { 01:17:55.576 "can_share": true, 01:17:55.576 "id": 1 01:17:55.576 }, 01:17:55.576 "trid": { 01:17:55.576 "adrfam": "IPv4", 01:17:55.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:55.576 "traddr": "10.0.0.2", 01:17:55.576 "trsvcid": "4420", 01:17:55.576 "trtype": "TCP" 01:17:55.576 }, 01:17:55.576 "vs": { 01:17:55.576 "nvme_version": "1.3" 01:17:55.576 } 01:17:55.576 } 01:17:55.576 ] 01:17:55.576 }, 01:17:55.576 "memory_domains": [ 01:17:55.576 { 01:17:55.576 "dma_device_id": "system", 01:17:55.576 "dma_device_type": 1 01:17:55.576 } 01:17:55.576 ], 01:17:55.576 "name": "nvme0n1", 01:17:55.576 "num_blocks": 2097152, 01:17:55.576 "product_name": "NVMe disk", 01:17:55.576 "supported_io_types": { 01:17:55.576 "abort": true, 01:17:55.576 "compare": true, 01:17:55.576 "compare_and_write": true, 01:17:55.576 "copy": true, 01:17:55.576 "flush": true, 01:17:55.576 "get_zone_info": false, 01:17:55.576 "nvme_admin": true, 01:17:55.576 "nvme_io": true, 01:17:55.576 "nvme_io_md": false, 01:17:55.576 "nvme_iov_md": false, 01:17:55.576 "read": true, 01:17:55.576 "reset": true, 01:17:55.576 "seek_data": false, 01:17:55.576 "seek_hole": false, 01:17:55.577 "unmap": false, 01:17:55.577 "write": true, 01:17:55.577 "write_zeroes": true, 01:17:55.577 "zcopy": false, 01:17:55.577 "zone_append": false, 01:17:55.577 "zone_management": false 01:17:55.577 }, 01:17:55.577 "uuid": "d1d5a3e4-e702-437f-82f7-00ec10c4a737", 01:17:55.577 "zoned": false 01:17:55.577 } 01:17:55.577 ] 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 01:17:55.577 11:15:00 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Rw0tzpzJbJ 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Rw0tzpzJbJ 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.577 [2024-07-22 11:15:00.733049] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:17:55.577 [2024-07-22 11:15:00.733175] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Rw0tzpzJbJ 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.577 [2024-07-22 11:15:00.741063] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Rw0tzpzJbJ 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.577 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.577 [2024-07-22 11:15:00.749043] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:17:55.577 [2024-07-22 11:15:00.749102] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 01:17:55.835 nvme0n1 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.835 [ 01:17:55.835 { 01:17:55.835 "aliases": [ 01:17:55.835 "d1d5a3e4-e702-437f-82f7-00ec10c4a737" 01:17:55.835 ], 01:17:55.835 "assigned_rate_limits": { 01:17:55.835 "r_mbytes_per_sec": 0, 01:17:55.835 
"rw_ios_per_sec": 0, 01:17:55.835 "rw_mbytes_per_sec": 0, 01:17:55.835 "w_mbytes_per_sec": 0 01:17:55.835 }, 01:17:55.835 "block_size": 512, 01:17:55.835 "claimed": false, 01:17:55.835 "driver_specific": { 01:17:55.835 "mp_policy": "active_passive", 01:17:55.835 "nvme": [ 01:17:55.835 { 01:17:55.835 "ctrlr_data": { 01:17:55.835 "ana_reporting": false, 01:17:55.835 "cntlid": 3, 01:17:55.835 "firmware_revision": "24.09", 01:17:55.835 "model_number": "SPDK bdev Controller", 01:17:55.835 "multi_ctrlr": true, 01:17:55.835 "oacs": { 01:17:55.835 "firmware": 0, 01:17:55.835 "format": 0, 01:17:55.835 "ns_manage": 0, 01:17:55.835 "security": 0 01:17:55.835 }, 01:17:55.835 "serial_number": "00000000000000000000", 01:17:55.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:55.835 "vendor_id": "0x8086" 01:17:55.835 }, 01:17:55.835 "ns_data": { 01:17:55.835 "can_share": true, 01:17:55.835 "id": 1 01:17:55.835 }, 01:17:55.835 "trid": { 01:17:55.835 "adrfam": "IPv4", 01:17:55.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:55.835 "traddr": "10.0.0.2", 01:17:55.835 "trsvcid": "4421", 01:17:55.835 "trtype": "TCP" 01:17:55.835 }, 01:17:55.835 "vs": { 01:17:55.835 "nvme_version": "1.3" 01:17:55.835 } 01:17:55.835 } 01:17:55.835 ] 01:17:55.835 }, 01:17:55.835 "memory_domains": [ 01:17:55.835 { 01:17:55.835 "dma_device_id": "system", 01:17:55.835 "dma_device_type": 1 01:17:55.835 } 01:17:55.835 ], 01:17:55.835 "name": "nvme0n1", 01:17:55.835 "num_blocks": 2097152, 01:17:55.835 "product_name": "NVMe disk", 01:17:55.835 "supported_io_types": { 01:17:55.835 "abort": true, 01:17:55.835 "compare": true, 01:17:55.835 "compare_and_write": true, 01:17:55.835 "copy": true, 01:17:55.835 "flush": true, 01:17:55.835 "get_zone_info": false, 01:17:55.835 "nvme_admin": true, 01:17:55.835 "nvme_io": true, 01:17:55.835 "nvme_io_md": false, 01:17:55.835 "nvme_iov_md": false, 01:17:55.835 "read": true, 01:17:55.835 "reset": true, 01:17:55.835 "seek_data": false, 01:17:55.835 "seek_hole": false, 01:17:55.835 "unmap": false, 01:17:55.835 "write": true, 01:17:55.835 "write_zeroes": true, 01:17:55.835 "zcopy": false, 01:17:55.835 "zone_append": false, 01:17:55.835 "zone_management": false 01:17:55.835 }, 01:17:55.835 "uuid": "d1d5a3e4-e702-437f-82f7-00ec10c4a737", 01:17:55.835 "zoned": false 01:17:55.835 } 01:17:55.835 ] 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Rw0tzpzJbJ 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 01:17:55.835 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:17:55.836 rmmod nvme_tcp 01:17:55.836 rmmod nvme_fabrics 01:17:55.836 rmmod nvme_keyring 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 104757 ']' 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 104757 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 104757 ']' 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 104757 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:55.836 11:15:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104757 01:17:55.836 killing process with pid 104757 01:17:55.836 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:17:55.836 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:17:55.836 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104757' 01:17:55.836 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 104757 01:17:55.836 [2024-07-22 11:15:01.010875] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 01:17:55.836 [2024-07-22 11:15:01.010905] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:17:55.836 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 104757 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:17:56.094 01:17:56.094 real 0m2.627s 01:17:56.094 user 0m2.459s 01:17:56.094 sys 0m0.665s 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 01:17:56.094 ************************************ 01:17:56.094 11:15:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:17:56.094 END TEST nvmf_async_init 01:17:56.094 ************************************ 01:17:56.352 11:15:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:17:56.352 11:15:01 nvmf_tcp 
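The tail end of nvmf_async_init above is the TLS leg: both the target and the initiator log that TLS support is experimental and that the PSK-path option is deprecated for v24.09, but the shape of the exchange is clear from the trace. Restated as plain commands, again assuming scripts/rpc.py; the redirect of the echoed key into the mktemp file is implied by async_init.sh lines 53-55 rather than shown verbatim, and the key material is the test vector printed in the log, not a secret.
key_path=$(mktemp)                                   # /tmp/tmp.Rw0tzpzJbJ in this run
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
scripts/rpc.py bdev_nvme_detach_controller nvme0     # cleanup, as in the trace
rm -f "$key_path"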
-- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 01:17:56.352 11:15:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:17:56.352 11:15:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:56.352 11:15:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:56.352 ************************************ 01:17:56.352 START TEST dma 01:17:56.352 ************************************ 01:17:56.352 11:15:01 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 01:17:56.352 * Looking for test storage... 01:17:56.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:17:56.352 11:15:01 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:56.352 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:56.352 11:15:01 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:56.352 11:15:01 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:56.352 11:15:01 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:56.352 11:15:01 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:56.352 11:15:01 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:56.353 11:15:01 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:56.353 11:15:01 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 01:17:56.353 11:15:01 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:17:56.353 11:15:01 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 01:17:56.353 11:15:01 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 01:17:56.353 11:15:01 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 01:17:56.353 01:17:56.353 real 0m0.107s 01:17:56.353 user 0m0.053s 01:17:56.353 sys 0m0.059s 01:17:56.353 11:15:01 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 01:17:56.353 11:15:01 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 01:17:56.353 ************************************ 01:17:56.353 END TEST dma 01:17:56.353 ************************************ 01:17:56.353 11:15:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:17:56.353 11:15:01 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:17:56.353 11:15:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:17:56.353 11:15:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:56.353 11:15:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:56.353 
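The dma test that just finished is intentionally a no-op on this transport: host/dma.sh checks the transport and bails out, which is why the run takes roughly 0.1 s of wall time. The guard, as expanded in the two dma.sh xtrace lines above, reduces to the line below; the literal "tcp" is the expansion of --transport=tcp, and the underlying variable name is not visible in the trace.
[ tcp != rdma ] && exit 0    # DMA offload is only exercised over RDMA, so TCP runs exit immediately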
************************************ 01:17:56.353 START TEST nvmf_identify 01:17:56.353 ************************************ 01:17:56.353 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:17:56.610 * Looking for test storage... 01:17:56.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:56.610 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:17:56.611 Cannot find device "nvmf_tgt_br" 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:17:56.611 Cannot find device "nvmf_tgt_br2" 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:17:56.611 Cannot find device "nvmf_tgt_br" 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:17:56.611 Cannot find device "nvmf_tgt_br2" 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:56.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:56.611 11:15:01 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:56.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:56.611 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:17:56.868 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:17:56.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:17:56.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 01:17:56.869 01:17:56.869 --- 10.0.0.2 ping statistics --- 01:17:56.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:56.869 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:17:56.869 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:56.869 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 01:17:56.869 01:17:56.869 --- 10.0.0.3 ping statistics --- 01:17:56.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:56.869 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:56.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:56.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 01:17:56.869 01:17:56.869 --- 10.0.0.1 ping statistics --- 01:17:56.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:56.869 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:56.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=105024 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 105024 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 105024 ']' 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
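Everything nvmf_identify needs network-wise is built by the nvmf_veth_init block traced above: a network namespace for the target, three veth pairs, a bridge tying the host-side ends together, an iptables accept rule for port 4420, and three sanity pings. The "Cannot find device" / "Cannot open network namespace" lines earlier are just the teardown of state that did not exist yet. Collected from the trace in order, the topology amounts to the following sketch (run as root):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge the three host-side ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # sanity checks seen above
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1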
01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:56.869 11:15:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:56.869 [2024-07-22 11:15:02.008702] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:56.869 [2024-07-22 11:15:02.008782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:57.126 [2024-07-22 11:15:02.142707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:17:57.126 [2024-07-22 11:15:02.215821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:57.126 [2024-07-22 11:15:02.215891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:57.126 [2024-07-22 11:15:02.215903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:57.127 [2024-07-22 11:15:02.215911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:57.127 [2024-07-22 11:15:02.215918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:57.127 [2024-07-22 11:15:02.216089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:57.127 [2024-07-22 11:15:02.217598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:17:57.127 [2024-07-22 11:15:02.217694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:17:57.127 [2024-07-22 11:15:02.217705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:58.061 [2024-07-22 11:15:02.962740] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 01:17:58.061 11:15:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:58.061 Malloc0 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
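With the namespace in place, the identify test launches its own target inside it (the waitforlisten line above shows the exact command, this time with four cores) and provisions a Malloc-backed subsystem over the same UNIX socket. A sketch of the calls made up to this point, again routed through scripts/rpc.py by assumption; the namespace with --nguid/--eui64 and the 4420 plus discovery listeners are added in the trace immediately below.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# wait for /var/tmp/spdk.sock to appear, then:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # -o -u 8192 exactly as in the rpc_cmd line above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 per identify.sh above
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001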
01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:58.061 [2024-07-22 11:15:03.075795] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:58.061 [ 01:17:58.061 { 01:17:58.061 "allow_any_host": true, 01:17:58.061 "hosts": [], 01:17:58.061 "listen_addresses": [ 01:17:58.061 { 01:17:58.061 "adrfam": "IPv4", 01:17:58.061 "traddr": "10.0.0.2", 01:17:58.061 "trsvcid": "4420", 01:17:58.061 "trtype": "TCP" 01:17:58.061 } 01:17:58.061 ], 01:17:58.061 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:17:58.061 "subtype": "Discovery" 01:17:58.061 }, 01:17:58.061 { 01:17:58.061 "allow_any_host": true, 01:17:58.061 "hosts": [], 01:17:58.061 "listen_addresses": [ 01:17:58.061 { 01:17:58.061 "adrfam": "IPv4", 01:17:58.061 "traddr": "10.0.0.2", 01:17:58.061 "trsvcid": "4420", 01:17:58.061 "trtype": "TCP" 01:17:58.061 } 01:17:58.061 ], 01:17:58.061 "max_cntlid": 65519, 01:17:58.061 "max_namespaces": 32, 01:17:58.061 "min_cntlid": 1, 01:17:58.061 "model_number": "SPDK bdev Controller", 01:17:58.061 "namespaces": [ 01:17:58.061 { 01:17:58.061 "bdev_name": "Malloc0", 01:17:58.061 "eui64": "ABCDEF0123456789", 01:17:58.061 "name": "Malloc0", 01:17:58.061 "nguid": "ABCDEF0123456789ABCDEF0123456789", 01:17:58.061 "nsid": 1, 01:17:58.061 "uuid": "2329c190-9d00-45c0-814c-7c31bc1ebdc8" 01:17:58.061 } 01:17:58.061 ], 01:17:58.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:17:58.061 "serial_number": "SPDK00000000000001", 01:17:58.061 "subtype": "NVMe" 01:17:58.061 } 01:17:58.061 ] 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.061 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 
-L all 01:17:58.061 [2024-07-22 11:15:03.131076] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:58.061 [2024-07-22 11:15:03.131120] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105079 ] 01:17:58.334 [2024-07-22 11:15:03.268218] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 01:17:58.334 [2024-07-22 11:15:03.268311] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:17:58.334 [2024-07-22 11:15:03.268319] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:17:58.334 [2024-07-22 11:15:03.268335] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:17:58.334 [2024-07-22 11:15:03.268344] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:17:58.334 [2024-07-22 11:15:03.268532] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 01:17:58.334 [2024-07-22 11:15:03.268599] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7f76e0 0 01:17:58.334 [2024-07-22 11:15:03.281979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:17:58.334 [2024-07-22 11:15:03.282004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:17:58.334 [2024-07-22 11:15:03.282020] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:17:58.335 [2024-07-22 11:15:03.282023] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:17:58.335 [2024-07-22 11:15:03.282073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.282082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.282086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.335 [2024-07-22 11:15:03.282102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:17:58.335 [2024-07-22 11:15:03.282136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.335 [2024-07-22 11:15:03.289984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.335 [2024-07-22 11:15:03.290005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.335 [2024-07-22 11:15:03.290010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.335 [2024-07-22 11:15:03.290029] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:17:58.335 [2024-07-22 11:15:03.290037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 01:17:58.335 [2024-07-22 11:15:03.290043] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 01:17:58.335 [2024-07-22 11:15:03.290064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
01:17:58.335 [2024-07-22 11:15:03.290071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.335 [2024-07-22 11:15:03.290083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.335 [2024-07-22 11:15:03.290114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.335 [2024-07-22 11:15:03.290199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.335 [2024-07-22 11:15:03.290207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.335 [2024-07-22 11:15:03.290210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.335 [2024-07-22 11:15:03.290220] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 01:17:58.335 [2024-07-22 11:15:03.290228] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 01:17:58.335 [2024-07-22 11:15:03.290248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.335 [2024-07-22 11:15:03.290263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.335 [2024-07-22 11:15:03.290290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.335 [2024-07-22 11:15:03.290344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.335 [2024-07-22 11:15:03.290355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.335 [2024-07-22 11:15:03.290359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.335 [2024-07-22 11:15:03.290369] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 01:17:58.335 [2024-07-22 11:15:03.290377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 01:17:58.335 [2024-07-22 11:15:03.290385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.335 [2024-07-22 11:15:03.290399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.335 [2024-07-22 11:15:03.290421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.335 [2024-07-22 11:15:03.290486] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.335 [2024-07-22 11:15:03.290493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.335 [2024-07-22 11:15:03.290497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.335 [2024-07-22 11:15:03.290506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:17:58.335 [2024-07-22 11:15:03.290517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.335 [2024-07-22 11:15:03.290532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.335 [2024-07-22 11:15:03.290552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.335 [2024-07-22 11:15:03.290614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.335 [2024-07-22 11:15:03.290621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.335 [2024-07-22 11:15:03.290625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.335 [2024-07-22 11:15:03.290633] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 01:17:58.335 [2024-07-22 11:15:03.290638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 01:17:58.335 [2024-07-22 11:15:03.290646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:17:58.335 [2024-07-22 11:15:03.290752] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 01:17:58.335 [2024-07-22 11:15:03.290765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:17:58.335 [2024-07-22 11:15:03.290778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.335 [2024-07-22 11:15:03.290793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.335 [2024-07-22 11:15:03.290816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.335 [2024-07-22 11:15:03.290869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.335 [2024-07-22 11:15:03.290876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.335 [2024-07-22 
11:15:03.290880] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.335 [2024-07-22 11:15:03.290889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:17:58.335 [2024-07-22 11:15:03.290900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.290908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.335 [2024-07-22 11:15:03.290915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.335 [2024-07-22 11:15:03.290936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.335 [2024-07-22 11:15:03.291022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.335 [2024-07-22 11:15:03.291032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.335 [2024-07-22 11:15:03.291036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.291040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.335 [2024-07-22 11:15:03.291046] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:17:58.335 [2024-07-22 11:15:03.291051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 01:17:58.335 [2024-07-22 11:15:03.291060] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 01:17:58.335 [2024-07-22 11:15:03.291073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 01:17:58.335 [2024-07-22 11:15:03.291084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.291089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.335 [2024-07-22 11:15:03.291096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.335 [2024-07-22 11:15:03.291121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.335 [2024-07-22 11:15:03.291229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.335 [2024-07-22 11:15:03.291237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.335 [2024-07-22 11:15:03.291241] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.291245] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f76e0): datao=0, datal=4096, cccid=0 01:17:58.335 [2024-07-22 11:15:03.291250] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x842ec0) on tqpair(0x7f76e0): expected_datao=0, payload_size=4096 01:17:58.335 [2024-07-22 
11:15:03.291255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.291263] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.291268] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.291277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.335 [2024-07-22 11:15:03.291283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.335 [2024-07-22 11:15:03.291287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.335 [2024-07-22 11:15:03.291291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.335 [2024-07-22 11:15:03.291302] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 01:17:58.335 [2024-07-22 11:15:03.291307] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 01:17:58.335 [2024-07-22 11:15:03.291312] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 01:17:58.335 [2024-07-22 11:15:03.291317] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 01:17:58.335 [2024-07-22 11:15:03.291323] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 01:17:58.336 [2024-07-22 11:15:03.291328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 01:17:58.336 [2024-07-22 11:15:03.291337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 01:17:58.336 [2024-07-22 11:15:03.291345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.291362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:17:58.336 [2024-07-22 11:15:03.291399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.336 [2024-07-22 11:15:03.291468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.336 [2024-07-22 11:15:03.291475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.336 [2024-07-22 11:15:03.291479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.336 [2024-07-22 11:15:03.291497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.291512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 01:17:58.336 [2024-07-22 11:15:03.291519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.291531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.336 [2024-07-22 11:15:03.291537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.291550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.336 [2024-07-22 11:15:03.291556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.291569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.336 [2024-07-22 11:15:03.291574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 01:17:58.336 [2024-07-22 11:15:03.291583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:17:58.336 [2024-07-22 11:15:03.291591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.291601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.336 [2024-07-22 11:15:03.291624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x842ec0, cid 0, qid 0 01:17:58.336 [2024-07-22 11:15:03.291645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843040, cid 1, qid 0 01:17:58.336 [2024-07-22 11:15:03.291650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8431c0, cid 2, qid 0 01:17:58.336 [2024-07-22 11:15:03.291654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.336 [2024-07-22 11:15:03.291659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8434c0, cid 4, qid 0 01:17:58.336 [2024-07-22 11:15:03.291761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.336 [2024-07-22 11:15:03.291770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.336 [2024-07-22 11:15:03.291774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8434c0) on tqpair=0x7f76e0 01:17:58.336 [2024-07-22 
11:15:03.291789] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 01:17:58.336 [2024-07-22 11:15:03.291796] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 01:17:58.336 [2024-07-22 11:15:03.291808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.291821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.336 [2024-07-22 11:15:03.291843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8434c0, cid 4, qid 0 01:17:58.336 [2024-07-22 11:15:03.291911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.336 [2024-07-22 11:15:03.291918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.336 [2024-07-22 11:15:03.291921] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291925] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f76e0): datao=0, datal=4096, cccid=4 01:17:58.336 [2024-07-22 11:15:03.291929] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8434c0) on tqpair(0x7f76e0): expected_datao=0, payload_size=4096 01:17:58.336 [2024-07-22 11:15:03.291933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291939] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291944] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.336 [2024-07-22 11:15:03.291972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.336 [2024-07-22 11:15:03.291978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.291982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8434c0) on tqpair=0x7f76e0 01:17:58.336 [2024-07-22 11:15:03.292014] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 01:17:58.336 [2024-07-22 11:15:03.292069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.292081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.292090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.336 [2024-07-22 11:15:03.292099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.292103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.292108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.292114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.336 [2024-07-22 11:15:03.292150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x8434c0, cid 4, qid 0 01:17:58.336 [2024-07-22 11:15:03.292159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843640, cid 5, qid 0 01:17:58.336 [2024-07-22 11:15:03.292268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.336 [2024-07-22 11:15:03.292276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.336 [2024-07-22 11:15:03.292280] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.292284] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f76e0): datao=0, datal=1024, cccid=4 01:17:58.336 [2024-07-22 11:15:03.292288] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8434c0) on tqpair(0x7f76e0): expected_datao=0, payload_size=1024 01:17:58.336 [2024-07-22 11:15:03.292292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.292299] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.292303] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.292309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.336 [2024-07-22 11:15:03.292316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.336 [2024-07-22 11:15:03.292320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.292324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843640) on tqpair=0x7f76e0 01:17:58.336 [2024-07-22 11:15:03.336997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.336 [2024-07-22 11:15:03.337019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.336 [2024-07-22 11:15:03.337036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8434c0) on tqpair=0x7f76e0 01:17:58.336 [2024-07-22 11:15:03.337055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.337069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.336 [2024-07-22 11:15:03.337103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8434c0, cid 4, qid 0 01:17:58.336 [2024-07-22 11:15:03.337184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.336 [2024-07-22 11:15:03.337191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.336 [2024-07-22 11:15:03.337195] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337198] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f76e0): datao=0, datal=3072, cccid=4 01:17:58.336 [2024-07-22 11:15:03.337203] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8434c0) on tqpair(0x7f76e0): expected_datao=0, payload_size=3072 01:17:58.336 [2024-07-22 11:15:03.337207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337214] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337218] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.336 [2024-07-22 11:15:03.337232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.336 [2024-07-22 11:15:03.337235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8434c0) on tqpair=0x7f76e0 01:17:58.336 [2024-07-22 11:15:03.337250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7f76e0) 01:17:58.336 [2024-07-22 11:15:03.337262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.336 [2024-07-22 11:15:03.337290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8434c0, cid 4, qid 0 01:17:58.336 [2024-07-22 11:15:03.337366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.336 [2024-07-22 11:15:03.337374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.336 [2024-07-22 11:15:03.337377] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.336 [2024-07-22 11:15:03.337381] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7f76e0): datao=0, datal=8, cccid=4 01:17:58.336 [2024-07-22 11:15:03.337385] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8434c0) on tqpair(0x7f76e0): expected_datao=0, payload_size=8 01:17:58.337 [2024-07-22 11:15:03.337389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.337 [2024-07-22 11:15:03.337395] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.337 [2024-07-22 11:15:03.337399] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.337 ===================================================== 01:17:58.337 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 01:17:58.337 ===================================================== 01:17:58.337 Controller Capabilities/Features 01:17:58.337 ================================ 01:17:58.337 Vendor ID: 0000 01:17:58.337 Subsystem Vendor ID: 0000 01:17:58.337 Serial Number: .................... 01:17:58.337 Model Number: ........................................ 
01:17:58.337 Firmware Version: 24.09 01:17:58.337 Recommended Arb Burst: 0 01:17:58.337 IEEE OUI Identifier: 00 00 00 01:17:58.337 Multi-path I/O 01:17:58.337 May have multiple subsystem ports: No 01:17:58.337 May have multiple controllers: No 01:17:58.337 Associated with SR-IOV VF: No 01:17:58.337 Max Data Transfer Size: 131072 01:17:58.337 Max Number of Namespaces: 0 01:17:58.337 Max Number of I/O Queues: 1024 01:17:58.337 NVMe Specification Version (VS): 1.3 01:17:58.337 NVMe Specification Version (Identify): 1.3 01:17:58.337 Maximum Queue Entries: 128 01:17:58.337 Contiguous Queues Required: Yes 01:17:58.337 Arbitration Mechanisms Supported 01:17:58.337 Weighted Round Robin: Not Supported 01:17:58.337 Vendor Specific: Not Supported 01:17:58.337 Reset Timeout: 15000 ms 01:17:58.337 Doorbell Stride: 4 bytes 01:17:58.337 NVM Subsystem Reset: Not Supported 01:17:58.337 Command Sets Supported 01:17:58.337 NVM Command Set: Supported 01:17:58.337 Boot Partition: Not Supported 01:17:58.337 Memory Page Size Minimum: 4096 bytes 01:17:58.337 Memory Page Size Maximum: 4096 bytes 01:17:58.337 Persistent Memory Region: Not Supported 01:17:58.337 Optional Asynchronous Events Supported 01:17:58.337 Namespace Attribute Notices: Not Supported 01:17:58.337 Firmware Activation Notices: Not Supported 01:17:58.337 ANA Change Notices: Not Supported 01:17:58.337 PLE Aggregate Log Change Notices: Not Supported 01:17:58.337 LBA Status Info Alert Notices: Not Supported 01:17:58.337 EGE Aggregate Log Change Notices: Not Supported 01:17:58.337 Normal NVM Subsystem Shutdown event: Not Supported 01:17:58.337 Zone Descriptor Change Notices: Not Supported 01:17:58.337 Discovery Log Change Notices: Supported 01:17:58.337 Controller Attributes 01:17:58.337 128-bit Host Identifier: Not Supported 01:17:58.337 Non-Operational Permissive Mode: Not Supported 01:17:58.337 NVM Sets: Not Supported 01:17:58.337 Read Recovery Levels: Not Supported 01:17:58.337 Endurance Groups: Not Supported 01:17:58.337 Predictable Latency Mode: Not Supported 01:17:58.337 Traffic Based Keep ALive: Not Supported 01:17:58.337 Namespace Granularity: Not Supported 01:17:58.337 SQ Associations: Not Supported 01:17:58.337 UUID List: Not Supported 01:17:58.337 Multi-Domain Subsystem: Not Supported 01:17:58.337 Fixed Capacity Management: Not Supported 01:17:58.337 Variable Capacity Management: Not Supported 01:17:58.337 Delete Endurance Group: Not Supported 01:17:58.337 Delete NVM Set: Not Supported 01:17:58.337 Extended LBA Formats Supported: Not Supported 01:17:58.337 Flexible Data Placement Supported: Not Supported 01:17:58.337 01:17:58.337 Controller Memory Buffer Support 01:17:58.337 ================================ 01:17:58.337 Supported: No 01:17:58.337 01:17:58.337 Persistent Memory Region Support 01:17:58.337 ================================ 01:17:58.337 Supported: No 01:17:58.337 01:17:58.337 Admin Command Set Attributes 01:17:58.337 ============================ 01:17:58.337 Security Send/Receive: Not Supported 01:17:58.337 Format NVM: Not Supported 01:17:58.337 Firmware Activate/Download: Not Supported 01:17:58.337 Namespace Management: Not Supported 01:17:58.337 Device Self-Test: Not Supported 01:17:58.337 Directives: Not Supported 01:17:58.337 NVMe-MI: Not Supported 01:17:58.337 Virtualization Management: Not Supported 01:17:58.337 Doorbell Buffer Config: Not Supported 01:17:58.337 Get LBA Status Capability: Not Supported 01:17:58.337 Command & Feature Lockdown Capability: Not Supported 01:17:58.337 Abort Command Limit: 1 01:17:58.337 Async 
Event Request Limit: 4 01:17:58.337 Number of Firmware Slots: N/A 01:17:58.337 Firmware Slot 1 Read-Only: N/A 01:17:58.337 Firm[2024-07-22 11:15:03.379027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.337 [2024-07-22 11:15:03.379049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.337 [2024-07-22 11:15:03.379055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.337 [2024-07-22 11:15:03.379059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8434c0) on tqpair=0x7f76e0 01:17:58.337 ware Activation Without Reset: N/A 01:17:58.337 Multiple Update Detection Support: N/A 01:17:58.337 Firmware Update Granularity: No Information Provided 01:17:58.337 Per-Namespace SMART Log: No 01:17:58.337 Asymmetric Namespace Access Log Page: Not Supported 01:17:58.337 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:17:58.337 Command Effects Log Page: Not Supported 01:17:58.337 Get Log Page Extended Data: Supported 01:17:58.337 Telemetry Log Pages: Not Supported 01:17:58.337 Persistent Event Log Pages: Not Supported 01:17:58.337 Supported Log Pages Log Page: May Support 01:17:58.337 Commands Supported & Effects Log Page: Not Supported 01:17:58.337 Feature Identifiers & Effects Log Page:May Support 01:17:58.337 NVMe-MI Commands & Effects Log Page: May Support 01:17:58.337 Data Area 4 for Telemetry Log: Not Supported 01:17:58.337 Error Log Page Entries Supported: 128 01:17:58.337 Keep Alive: Not Supported 01:17:58.337 01:17:58.337 NVM Command Set Attributes 01:17:58.337 ========================== 01:17:58.337 Submission Queue Entry Size 01:17:58.337 Max: 1 01:17:58.337 Min: 1 01:17:58.337 Completion Queue Entry Size 01:17:58.337 Max: 1 01:17:58.337 Min: 1 01:17:58.337 Number of Namespaces: 0 01:17:58.337 Compare Command: Not Supported 01:17:58.337 Write Uncorrectable Command: Not Supported 01:17:58.337 Dataset Management Command: Not Supported 01:17:58.337 Write Zeroes Command: Not Supported 01:17:58.337 Set Features Save Field: Not Supported 01:17:58.337 Reservations: Not Supported 01:17:58.337 Timestamp: Not Supported 01:17:58.337 Copy: Not Supported 01:17:58.337 Volatile Write Cache: Not Present 01:17:58.337 Atomic Write Unit (Normal): 1 01:17:58.337 Atomic Write Unit (PFail): 1 01:17:58.337 Atomic Compare & Write Unit: 1 01:17:58.337 Fused Compare & Write: Supported 01:17:58.337 Scatter-Gather List 01:17:58.337 SGL Command Set: Supported 01:17:58.337 SGL Keyed: Supported 01:17:58.337 SGL Bit Bucket Descriptor: Not Supported 01:17:58.337 SGL Metadata Pointer: Not Supported 01:17:58.337 Oversized SGL: Not Supported 01:17:58.337 SGL Metadata Address: Not Supported 01:17:58.337 SGL Offset: Supported 01:17:58.337 Transport SGL Data Block: Not Supported 01:17:58.337 Replay Protected Memory Block: Not Supported 01:17:58.337 01:17:58.337 Firmware Slot Information 01:17:58.337 ========================= 01:17:58.337 Active slot: 0 01:17:58.337 01:17:58.337 01:17:58.337 Error Log 01:17:58.337 ========= 01:17:58.337 01:17:58.337 Active Namespaces 01:17:58.337 ================= 01:17:58.337 Discovery Log Page 01:17:58.337 ================== 01:17:58.337 Generation Counter: 2 01:17:58.337 Number of Records: 2 01:17:58.337 Record Format: 0 01:17:58.337 01:17:58.337 Discovery Log Entry 0 01:17:58.337 ---------------------- 01:17:58.337 Transport Type: 3 (TCP) 01:17:58.337 Address Family: 1 (IPv4) 01:17:58.337 Subsystem Type: 3 (Current Discovery Subsystem) 01:17:58.337 Entry Flags: 01:17:58.337 Duplicate Returned 
Information: 1 01:17:58.337 Explicit Persistent Connection Support for Discovery: 1 01:17:58.337 Transport Requirements: 01:17:58.337 Secure Channel: Not Required 01:17:58.337 Port ID: 0 (0x0000) 01:17:58.337 Controller ID: 65535 (0xffff) 01:17:58.337 Admin Max SQ Size: 128 01:17:58.337 Transport Service Identifier: 4420 01:17:58.337 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:17:58.337 Transport Address: 10.0.0.2 01:17:58.337 Discovery Log Entry 1 01:17:58.337 ---------------------- 01:17:58.337 Transport Type: 3 (TCP) 01:17:58.337 Address Family: 1 (IPv4) 01:17:58.337 Subsystem Type: 2 (NVM Subsystem) 01:17:58.337 Entry Flags: 01:17:58.337 Duplicate Returned Information: 0 01:17:58.337 Explicit Persistent Connection Support for Discovery: 0 01:17:58.337 Transport Requirements: 01:17:58.337 Secure Channel: Not Required 01:17:58.337 Port ID: 0 (0x0000) 01:17:58.337 Controller ID: 65535 (0xffff) 01:17:58.337 Admin Max SQ Size: 128 01:17:58.337 Transport Service Identifier: 4420 01:17:58.337 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 01:17:58.337 Transport Address: 10.0.0.2 [2024-07-22 11:15:03.379183] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 01:17:58.337 [2024-07-22 11:15:03.379203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x842ec0) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:58.338 [2024-07-22 11:15:03.379217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843040) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:58.338 [2024-07-22 11:15:03.379226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8431c0) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:58.338 [2024-07-22 11:15:03.379234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:58.338 [2024-07-22 11:15:03.379249] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.379264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.379294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.379367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.379375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.379378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379382] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.379405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.379431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.379509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.379516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.379520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379534] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 01:17:58.338 [2024-07-22 11:15:03.379540] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 01:17:58.338 [2024-07-22 11:15:03.379550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.379565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.379587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.379661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.379670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.379674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.379704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.379726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.379787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.379794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.379798] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.379827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.379847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.379907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.379913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.379917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.379931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.379939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.379946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.379979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.380037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.380045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.380048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.380063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.380078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.380100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.380154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.380160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.380164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 
[2024-07-22 11:15:03.380178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.380194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.380214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.380269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.380275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.380279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.380293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.380308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.380329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.380388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.380395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.380399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.380414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.380429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.380449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.380506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.380513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.380516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.380530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 
11:15:03.380539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.380545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.338 [2024-07-22 11:15:03.380565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.338 [2024-07-22 11:15:03.380620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.338 [2024-07-22 11:15:03.380627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.338 [2024-07-22 11:15:03.380630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.338 [2024-07-22 11:15:03.380644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.338 [2024-07-22 11:15:03.380652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.338 [2024-07-22 11:15:03.380659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.380679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.380734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.380741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.380745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.380748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.380759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.380764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.380767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.380774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.380794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.380847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.380854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.380858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.380861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.380872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.380877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.380880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.380887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.380908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.380973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.380982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.380986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.380990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.381001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.381015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.381038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.381101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.381108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.381111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.381125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.381141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.381161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.381218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.381225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.381236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.381250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.381265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.381285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 
11:15:03.381341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.381348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.381351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.381365] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.381380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.381400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.381454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.381461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.381464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.381478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.381493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.381513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.381568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.381575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.381578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.381592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.381607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.381627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.381679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.381686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 
11:15:03.381690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.381704] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.381719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.381739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.381796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.381803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.381806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.339 [2024-07-22 11:15:03.381820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.339 [2024-07-22 11:15:03.381828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.339 [2024-07-22 11:15:03.381835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.339 [2024-07-22 11:15:03.381855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.339 [2024-07-22 11:15:03.381916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.339 [2024-07-22 11:15:03.381923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.339 [2024-07-22 11:15:03.381927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.340 [2024-07-22 11:15:03.381930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 01:17:58.340 [2024-07-22 11:15:03.381941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.340 [2024-07-22 11:15:03.381945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.340 [2024-07-22 11:15:03.381949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7f76e0) 01:17:58.340 [2024-07-22 11:15:03.381955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.340 [2024-07-22 11:15:03.386007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843340, cid 3, qid 0 01:17:58.340 [2024-07-22 11:15:03.386064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.340 [2024-07-22 11:15:03.386072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.340 [2024-07-22 11:15:03.386075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.340 [2024-07-22 11:15:03.386079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843340) on tqpair=0x7f76e0 
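(The long run of FABRIC PROPERTY GET qid:0 cid:3 entries above is the host repeatedly reading CSTS while the discovery controller shuts down — nvme_ctrlr_shutdown_poll_async — and the entry that follows reports the shutdown finishing in about 6 ms against the 10000 ms timeout noted earlier, since RTD3E = 0. In the TCP transport each of those reads travels as a Fabric Property Get capsule, which is exactly what the repeated NOTICE lines show. A minimal, self-contained C sketch of that kind of CSTS.SHST poll loop follows; the register read is stubbed and all names here are illustrative, so this is only the idea behind the poll, not SPDK's implementation.)

/* Illustrative sketch only: poll a (stubbed) CSTS register until the
 * controller reports shutdown complete or a timeout expires, mirroring
 * the CSTS.SHST polling visible in the log above. Not SPDK code. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>
#include <time.h>

#define CSTS_SHST_MASK      0x0000000CU  /* CSTS.SHST, bits 3:2 */
#define CSTS_SHST_COMPLETE  0x00000008U  /* 10b = shutdown processing complete */

/* Stand-in for reading CSTS; a fabrics host would issue a Property Get
 * capsule for each read, as the NOTICE entries above record. */
static uint32_t read_csts(void)
{
    static int polls;
    return (++polls < 5) ? 0x00000004U /* shutdown occurring */ : CSTS_SHST_COMPLETE;
}

static bool wait_for_shutdown(unsigned timeout_ms)
{
    struct timespec delay = { .tv_sec = 0, .tv_nsec = 1000 * 1000 }; /* 1 ms */
    for (unsigned waited = 0; waited < timeout_ms; waited++) {
        if ((read_csts() & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE)
            return true;           /* controller finished shutting down */
        nanosleep(&delay, NULL);    /* back off briefly before re-polling */
    }
    return false;                   /* timed out waiting for CSTS.SHST */
}

int main(void)
{
    /* The log shows RTD3E = 0 us, so the host falls back to a 10000 ms
     * shutdown timeout; the same value is used for this sketch. */
    printf(wait_for_shutdown(10000) ? "shutdown complete\n"
                                    : "shutdown timed out\n");
    return 0;
}
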
01:17:58.340 [2024-07-22 11:15:03.386089] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 01:17:58.340 01:17:58.340 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 01:17:58.340 [2024-07-22 11:15:03.424002] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:17:58.340 [2024-07-22 11:15:03.424060] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105081 ] 01:17:58.602 [2024-07-22 11:15:03.563518] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 01:17:58.602 [2024-07-22 11:15:03.563582] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:17:58.602 [2024-07-22 11:15:03.563590] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:17:58.602 [2024-07-22 11:15:03.563603] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:17:58.602 [2024-07-22 11:15:03.563615] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:17:58.602 [2024-07-22 11:15:03.563774] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 01:17:58.602 [2024-07-22 11:15:03.563823] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x113b6e0 0 01:17:58.602 [2024-07-22 11:15:03.578984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:17:58.602 [2024-07-22 11:15:03.579009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:17:58.602 [2024-07-22 11:15:03.579014] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:17:58.602 [2024-07-22 11:15:03.579017] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:17:58.602 [2024-07-22 11:15:03.579060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.579068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.579071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.602 [2024-07-22 11:15:03.579082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:17:58.602 [2024-07-22 11:15:03.579145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.602 [2024-07-22 11:15:03.587094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.602 [2024-07-22 11:15:03.587115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.602 [2024-07-22 11:15:03.587120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.602 [2024-07-22 11:15:03.587135] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:17:58.602 [2024-07-22 11:15:03.587143] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 01:17:58.602 [2024-07-22 11:15:03.587150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 01:17:58.602 [2024-07-22 11:15:03.587168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.602 [2024-07-22 11:15:03.587187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.602 [2024-07-22 11:15:03.587229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.602 [2024-07-22 11:15:03.587422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.602 [2024-07-22 11:15:03.587437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.602 [2024-07-22 11:15:03.587442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.602 [2024-07-22 11:15:03.587451] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 01:17:58.602 [2024-07-22 11:15:03.587460] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 01:17:58.602 [2024-07-22 11:15:03.587469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.602 [2024-07-22 11:15:03.587484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.602 [2024-07-22 11:15:03.587506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.602 [2024-07-22 11:15:03.587849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.602 [2024-07-22 11:15:03.587864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.602 [2024-07-22 11:15:03.587868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.602 [2024-07-22 11:15:03.587879] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 01:17:58.602 [2024-07-22 11:15:03.587888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 01:17:58.602 [2024-07-22 11:15:03.587897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.587905] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.602 
[2024-07-22 11:15:03.587912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.602 [2024-07-22 11:15:03.587938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.602 [2024-07-22 11:15:03.588210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.602 [2024-07-22 11:15:03.588225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.602 [2024-07-22 11:15:03.588230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.588234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.602 [2024-07-22 11:15:03.588240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:17:58.602 [2024-07-22 11:15:03.588252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.602 [2024-07-22 11:15:03.588257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.588261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.588268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.603 [2024-07-22 11:15:03.588321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.603 [2024-07-22 11:15:03.588569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.603 [2024-07-22 11:15:03.588582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.603 [2024-07-22 11:15:03.588587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.588590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.603 [2024-07-22 11:15:03.588595] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 01:17:58.603 [2024-07-22 11:15:03.588600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 01:17:58.603 [2024-07-22 11:15:03.588608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:17:58.603 [2024-07-22 11:15:03.588714] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 01:17:58.603 [2024-07-22 11:15:03.588718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:17:58.603 [2024-07-22 11:15:03.588727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.588731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.588734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.588741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.603 [2024-07-22 11:15:03.588767] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.603 [2024-07-22 11:15:03.589158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.603 [2024-07-22 11:15:03.589174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.603 [2024-07-22 11:15:03.589179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.589183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.603 [2024-07-22 11:15:03.589188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:17:58.603 [2024-07-22 11:15:03.589200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.589205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.589209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.589216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.603 [2024-07-22 11:15:03.589243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.603 [2024-07-22 11:15:03.589534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.603 [2024-07-22 11:15:03.589547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.603 [2024-07-22 11:15:03.589551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.589555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.603 [2024-07-22 11:15:03.589560] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:17:58.603 [2024-07-22 11:15:03.589565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.589574] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 01:17:58.603 [2024-07-22 11:15:03.589585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.589595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.589599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.589607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.603 [2024-07-22 11:15:03.589632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.603 [2024-07-22 11:15:03.589939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.603 [2024-07-22 11:15:03.589952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.603 [2024-07-22 11:15:03.590002] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590008] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x113b6e0): datao=0, datal=4096, cccid=0 01:17:58.603 [2024-07-22 11:15:03.590013] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1186ec0) on tqpair(0x113b6e0): expected_datao=0, payload_size=4096 01:17:58.603 [2024-07-22 11:15:03.590019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590026] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590030] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.603 [2024-07-22 11:15:03.590412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.603 [2024-07-22 11:15:03.590416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.603 [2024-07-22 11:15:03.590428] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 01:17:58.603 [2024-07-22 11:15:03.590433] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 01:17:58.603 [2024-07-22 11:15:03.590437] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 01:17:58.603 [2024-07-22 11:15:03.590441] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 01:17:58.603 [2024-07-22 11:15:03.590446] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 01:17:58.603 [2024-07-22 11:15:03.590450] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.590458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.590466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.590481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:17:58.603 [2024-07-22 11:15:03.590518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.603 [2024-07-22 11:15:03.590772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.603 [2024-07-22 11:15:03.590786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.603 [2024-07-22 11:15:03.590791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.603 [2024-07-22 11:15:03.590807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.590822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.603 [2024-07-22 11:15:03.590829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590832] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.590841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.603 [2024-07-22 11:15:03.590846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.590859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.603 [2024-07-22 11:15:03.590864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.590876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.603 [2024-07-22 11:15:03.590880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.590889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.590896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.590900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x113b6e0) 01:17:58.603 [2024-07-22 11:15:03.590906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.603 [2024-07-22 11:15:03.590933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1186ec0, cid 0, qid 0 01:17:58.603 [2024-07-22 11:15:03.590941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187040, cid 1, qid 0 01:17:58.603 [2024-07-22 11:15:03.590946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11871c0, cid 2, qid 0 01:17:58.603 [2024-07-22 11:15:03.590951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.603 [2024-07-22 11:15:03.590955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11874c0, cid 4, qid 0 01:17:58.603 [2024-07-22 11:15:03.591621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.603 [2024-07-22 11:15:03.591663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.603 [2024-07-22 11:15:03.591669] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.591674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11874c0) on tqpair=0x113b6e0 01:17:58.603 [2024-07-22 11:15:03.591685] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 01:17:58.603 [2024-07-22 11:15:03.591691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.591701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.591709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 01:17:58.603 [2024-07-22 11:15:03.591716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.591721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.603 [2024-07-22 11:15:03.591724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x113b6e0) 01:17:58.604 [2024-07-22 11:15:03.591732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:17:58.604 [2024-07-22 11:15:03.591759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11874c0, cid 4, qid 0 01:17:58.604 [2024-07-22 11:15:03.591840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.604 [2024-07-22 11:15:03.591847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.604 [2024-07-22 11:15:03.591851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.591855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11874c0) on tqpair=0x113b6e0 01:17:58.604 [2024-07-22 11:15:03.591914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.591927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.591937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.591941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x113b6e0) 01:17:58.604 [2024-07-22 11:15:03.591949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.604 [2024-07-22 11:15:03.592007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11874c0, cid 4, qid 0 01:17:58.604 [2024-07-22 11:15:03.592102] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.604 [2024-07-22 11:15:03.592110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.604 [2024-07-22 11:15:03.592114] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592118] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x113b6e0): datao=0, datal=4096, cccid=4 01:17:58.604 [2024-07-22 11:15:03.592123] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11874c0) on tqpair(0x113b6e0): expected_datao=0, payload_size=4096 01:17:58.604 [2024-07-22 11:15:03.592127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592134] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592138] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.604 [2024-07-22 11:15:03.592154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.604 [2024-07-22 11:15:03.592157] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11874c0) on tqpair=0x113b6e0 01:17:58.604 [2024-07-22 11:15:03.592175] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 01:17:58.604 [2024-07-22 11:15:03.592189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x113b6e0) 01:17:58.604 [2024-07-22 11:15:03.592222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.604 [2024-07-22 11:15:03.592247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11874c0, cid 4, qid 0 01:17:58.604 [2024-07-22 11:15:03.592382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.604 [2024-07-22 11:15:03.592389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.604 [2024-07-22 11:15:03.592393] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592396] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x113b6e0): datao=0, datal=4096, cccid=4 01:17:58.604 [2024-07-22 11:15:03.592400] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11874c0) on tqpair(0x113b6e0): expected_datao=0, payload_size=4096 01:17:58.604 [2024-07-22 11:15:03.592404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592411] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592414] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.604 [2024-07-22 11:15:03.592429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.604 [2024-07-22 11:15:03.592432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11874c0) on tqpair=0x113b6e0 01:17:58.604 [2024-07-22 11:15:03.592452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592464] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x113b6e0) 01:17:58.604 [2024-07-22 11:15:03.592484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.604 [2024-07-22 11:15:03.592508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11874c0, cid 4, qid 0 01:17:58.604 [2024-07-22 11:15:03.592698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.604 [2024-07-22 11:15:03.592706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.604 [2024-07-22 11:15:03.592709] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592712] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x113b6e0): datao=0, datal=4096, cccid=4 01:17:58.604 [2024-07-22 11:15:03.592716] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11874c0) on tqpair(0x113b6e0): expected_datao=0, payload_size=4096 01:17:58.604 [2024-07-22 11:15:03.592721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592727] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592731] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.604 [2024-07-22 11:15:03.592745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.604 [2024-07-22 11:15:03.592748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11874c0) on tqpair=0x113b6e0 01:17:58.604 [2024-07-22 11:15:03.592761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592804] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 01:17:58.604 [2024-07-22 
11:15:03.592808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 01:17:58.604 [2024-07-22 11:15:03.592813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 01:17:58.604 [2024-07-22 11:15:03.592828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x113b6e0) 01:17:58.604 [2024-07-22 11:15:03.592840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.604 [2024-07-22 11:15:03.592848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.592855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x113b6e0) 01:17:58.604 [2024-07-22 11:15:03.592860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:17:58.604 [2024-07-22 11:15:03.592890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11874c0, cid 4, qid 0 01:17:58.604 [2024-07-22 11:15:03.592899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187640, cid 5, qid 0 01:17:58.604 [2024-07-22 11:15:03.597021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.604 [2024-07-22 11:15:03.597039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.604 [2024-07-22 11:15:03.597044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.597049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11874c0) on tqpair=0x113b6e0 01:17:58.604 [2024-07-22 11:15:03.597056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.604 [2024-07-22 11:15:03.597061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.604 [2024-07-22 11:15:03.597064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.597068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187640) on tqpair=0x113b6e0 01:17:58.604 [2024-07-22 11:15:03.597080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.597085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x113b6e0) 01:17:58.604 [2024-07-22 11:15:03.597093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.604 [2024-07-22 11:15:03.597123] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187640, cid 5, qid 0 01:17:58.604 [2024-07-22 11:15:03.597350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.604 [2024-07-22 11:15:03.597377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.604 [2024-07-22 11:15:03.597381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.597385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187640) on tqpair=0x113b6e0 01:17:58.604 [2024-07-22 11:15:03.597397] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.597402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x113b6e0) 01:17:58.604 [2024-07-22 11:15:03.597409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.604 [2024-07-22 11:15:03.597434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187640, cid 5, qid 0 01:17:58.604 [2024-07-22 11:15:03.597708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.604 [2024-07-22 11:15:03.597723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.604 [2024-07-22 11:15:03.597728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.597731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187640) on tqpair=0x113b6e0 01:17:58.604 [2024-07-22 11:15:03.597743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.604 [2024-07-22 11:15:03.597748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x113b6e0) 01:17:58.605 [2024-07-22 11:15:03.597755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.605 [2024-07-22 11:15:03.597778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187640, cid 5, qid 0 01:17:58.605 [2024-07-22 11:15:03.598075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.605 [2024-07-22 11:15:03.598090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.605 [2024-07-22 11:15:03.598094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187640) on tqpair=0x113b6e0 01:17:58.605 [2024-07-22 11:15:03.598120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x113b6e0) 01:17:58.605 [2024-07-22 11:15:03.598134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.605 [2024-07-22 11:15:03.598141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598145] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x113b6e0) 01:17:58.605 [2024-07-22 11:15:03.598151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.605 [2024-07-22 11:15:03.598158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x113b6e0) 01:17:58.605 [2024-07-22 11:15:03.598167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.605 [2024-07-22 11:15:03.598174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598178] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x113b6e0) 01:17:58.605 [2024-07-22 11:15:03.598183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.605 [2024-07-22 11:15:03.598210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187640, cid 5, qid 0 01:17:58.605 [2024-07-22 11:15:03.598219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11874c0, cid 4, qid 0 01:17:58.605 [2024-07-22 11:15:03.598223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11877c0, cid 6, qid 0 01:17:58.605 [2024-07-22 11:15:03.598227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187940, cid 7, qid 0 01:17:58.605 [2024-07-22 11:15:03.598731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.605 [2024-07-22 11:15:03.598746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.605 [2024-07-22 11:15:03.598750] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598763] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x113b6e0): datao=0, datal=8192, cccid=5 01:17:58.605 [2024-07-22 11:15:03.598768] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1187640) on tqpair(0x113b6e0): expected_datao=0, payload_size=8192 01:17:58.605 [2024-07-22 11:15:03.598772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598791] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598796] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.605 [2024-07-22 11:15:03.598807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.605 [2024-07-22 11:15:03.598811] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598814] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x113b6e0): datao=0, datal=512, cccid=4 01:17:58.605 [2024-07-22 11:15:03.598818] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11874c0) on tqpair(0x113b6e0): expected_datao=0, payload_size=512 01:17:58.605 [2024-07-22 11:15:03.598822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598828] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598831] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.605 [2024-07-22 11:15:03.598841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.605 [2024-07-22 11:15:03.598845] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598848] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x113b6e0): datao=0, datal=512, cccid=6 01:17:58.605 [2024-07-22 11:15:03.598852] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11877c0) on tqpair(0x113b6e0): expected_datao=0, payload_size=512 01:17:58.605 [2024-07-22 11:15:03.598856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.605 [2024-07-22 
11:15:03.598861] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598864] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:17:58.605 [2024-07-22 11:15:03.598874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:17:58.605 [2024-07-22 11:15:03.598877] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598880] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x113b6e0): datao=0, datal=4096, cccid=7 01:17:58.605 [2024-07-22 11:15:03.598884] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1187940) on tqpair(0x113b6e0): expected_datao=0, payload_size=4096 01:17:58.605 [2024-07-22 11:15:03.598888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598893] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598897] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.605 [2024-07-22 11:15:03.598911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.605 [2024-07-22 11:15:03.598914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187640) on tqpair=0x113b6e0 01:17:58.605 [2024-07-22 11:15:03.598934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.605 [2024-07-22 11:15:03.598942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.605 [2024-07-22 11:15:03.598945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.598948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11874c0) on tqpair=0x113b6e0 01:17:58.605 [2024-07-22 11:15:03.598987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.605 [2024-07-22 11:15:03.598997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.605 [2024-07-22 11:15:03.599000] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.599004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11877c0) on tqpair=0x113b6e0 01:17:58.605 [2024-07-22 11:15:03.599011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.605 [2024-07-22 11:15:03.599017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.605 [2024-07-22 11:15:03.599020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.605 [2024-07-22 11:15:03.599023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187940) on tqpair=0x113b6e0 01:17:58.605 ===================================================== 01:17:58.605 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:17:58.605 ===================================================== 01:17:58.605 Controller Capabilities/Features 01:17:58.605 ================================ 01:17:58.605 Vendor ID: 8086 01:17:58.605 Subsystem Vendor ID: 8086 01:17:58.605 Serial Number: SPDK00000000000001 01:17:58.605 Model Number: SPDK bdev Controller 01:17:58.605 Firmware Version: 24.09 01:17:58.605 Recommended Arb Burst: 6 01:17:58.605 
IEEE OUI Identifier: e4 d2 5c 01:17:58.605 Multi-path I/O 01:17:58.605 May have multiple subsystem ports: Yes 01:17:58.605 May have multiple controllers: Yes 01:17:58.605 Associated with SR-IOV VF: No 01:17:58.605 Max Data Transfer Size: 131072 01:17:58.605 Max Number of Namespaces: 32 01:17:58.605 Max Number of I/O Queues: 127 01:17:58.605 NVMe Specification Version (VS): 1.3 01:17:58.605 NVMe Specification Version (Identify): 1.3 01:17:58.605 Maximum Queue Entries: 128 01:17:58.605 Contiguous Queues Required: Yes 01:17:58.605 Arbitration Mechanisms Supported 01:17:58.605 Weighted Round Robin: Not Supported 01:17:58.605 Vendor Specific: Not Supported 01:17:58.605 Reset Timeout: 15000 ms 01:17:58.605 Doorbell Stride: 4 bytes 01:17:58.605 NVM Subsystem Reset: Not Supported 01:17:58.605 Command Sets Supported 01:17:58.605 NVM Command Set: Supported 01:17:58.605 Boot Partition: Not Supported 01:17:58.605 Memory Page Size Minimum: 4096 bytes 01:17:58.605 Memory Page Size Maximum: 4096 bytes 01:17:58.605 Persistent Memory Region: Not Supported 01:17:58.605 Optional Asynchronous Events Supported 01:17:58.605 Namespace Attribute Notices: Supported 01:17:58.605 Firmware Activation Notices: Not Supported 01:17:58.605 ANA Change Notices: Not Supported 01:17:58.605 PLE Aggregate Log Change Notices: Not Supported 01:17:58.605 LBA Status Info Alert Notices: Not Supported 01:17:58.605 EGE Aggregate Log Change Notices: Not Supported 01:17:58.605 Normal NVM Subsystem Shutdown event: Not Supported 01:17:58.605 Zone Descriptor Change Notices: Not Supported 01:17:58.605 Discovery Log Change Notices: Not Supported 01:17:58.605 Controller Attributes 01:17:58.605 128-bit Host Identifier: Supported 01:17:58.605 Non-Operational Permissive Mode: Not Supported 01:17:58.605 NVM Sets: Not Supported 01:17:58.605 Read Recovery Levels: Not Supported 01:17:58.605 Endurance Groups: Not Supported 01:17:58.605 Predictable Latency Mode: Not Supported 01:17:58.605 Traffic Based Keep ALive: Not Supported 01:17:58.605 Namespace Granularity: Not Supported 01:17:58.605 SQ Associations: Not Supported 01:17:58.605 UUID List: Not Supported 01:17:58.605 Multi-Domain Subsystem: Not Supported 01:17:58.605 Fixed Capacity Management: Not Supported 01:17:58.605 Variable Capacity Management: Not Supported 01:17:58.605 Delete Endurance Group: Not Supported 01:17:58.605 Delete NVM Set: Not Supported 01:17:58.605 Extended LBA Formats Supported: Not Supported 01:17:58.605 Flexible Data Placement Supported: Not Supported 01:17:58.605 01:17:58.605 Controller Memory Buffer Support 01:17:58.605 ================================ 01:17:58.605 Supported: No 01:17:58.605 01:17:58.605 Persistent Memory Region Support 01:17:58.605 ================================ 01:17:58.605 Supported: No 01:17:58.605 01:17:58.605 Admin Command Set Attributes 01:17:58.605 ============================ 01:17:58.605 Security Send/Receive: Not Supported 01:17:58.606 Format NVM: Not Supported 01:17:58.606 Firmware Activate/Download: Not Supported 01:17:58.606 Namespace Management: Not Supported 01:17:58.606 Device Self-Test: Not Supported 01:17:58.606 Directives: Not Supported 01:17:58.606 NVMe-MI: Not Supported 01:17:58.606 Virtualization Management: Not Supported 01:17:58.606 Doorbell Buffer Config: Not Supported 01:17:58.606 Get LBA Status Capability: Not Supported 01:17:58.606 Command & Feature Lockdown Capability: Not Supported 01:17:58.606 Abort Command Limit: 4 01:17:58.606 Async Event Request Limit: 4 01:17:58.606 Number of Firmware Slots: N/A 01:17:58.606 Firmware 
Slot 1 Read-Only: N/A 01:17:58.606 Firmware Activation Without Reset: N/A 01:17:58.606 Multiple Update Detection Support: N/A 01:17:58.606 Firmware Update Granularity: No Information Provided 01:17:58.606 Per-Namespace SMART Log: No 01:17:58.606 Asymmetric Namespace Access Log Page: Not Supported 01:17:58.606 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 01:17:58.606 Command Effects Log Page: Supported 01:17:58.606 Get Log Page Extended Data: Supported 01:17:58.606 Telemetry Log Pages: Not Supported 01:17:58.606 Persistent Event Log Pages: Not Supported 01:17:58.606 Supported Log Pages Log Page: May Support 01:17:58.606 Commands Supported & Effects Log Page: Not Supported 01:17:58.606 Feature Identifiers & Effects Log Page:May Support 01:17:58.606 NVMe-MI Commands & Effects Log Page: May Support 01:17:58.606 Data Area 4 for Telemetry Log: Not Supported 01:17:58.606 Error Log Page Entries Supported: 128 01:17:58.606 Keep Alive: Supported 01:17:58.606 Keep Alive Granularity: 10000 ms 01:17:58.606 01:17:58.606 NVM Command Set Attributes 01:17:58.606 ========================== 01:17:58.606 Submission Queue Entry Size 01:17:58.606 Max: 64 01:17:58.606 Min: 64 01:17:58.606 Completion Queue Entry Size 01:17:58.606 Max: 16 01:17:58.606 Min: 16 01:17:58.606 Number of Namespaces: 32 01:17:58.606 Compare Command: Supported 01:17:58.606 Write Uncorrectable Command: Not Supported 01:17:58.606 Dataset Management Command: Supported 01:17:58.606 Write Zeroes Command: Supported 01:17:58.606 Set Features Save Field: Not Supported 01:17:58.606 Reservations: Supported 01:17:58.606 Timestamp: Not Supported 01:17:58.606 Copy: Supported 01:17:58.606 Volatile Write Cache: Present 01:17:58.606 Atomic Write Unit (Normal): 1 01:17:58.606 Atomic Write Unit (PFail): 1 01:17:58.606 Atomic Compare & Write Unit: 1 01:17:58.606 Fused Compare & Write: Supported 01:17:58.606 Scatter-Gather List 01:17:58.606 SGL Command Set: Supported 01:17:58.606 SGL Keyed: Supported 01:17:58.606 SGL Bit Bucket Descriptor: Not Supported 01:17:58.606 SGL Metadata Pointer: Not Supported 01:17:58.606 Oversized SGL: Not Supported 01:17:58.606 SGL Metadata Address: Not Supported 01:17:58.606 SGL Offset: Supported 01:17:58.606 Transport SGL Data Block: Not Supported 01:17:58.606 Replay Protected Memory Block: Not Supported 01:17:58.606 01:17:58.606 Firmware Slot Information 01:17:58.606 ========================= 01:17:58.606 Active slot: 1 01:17:58.606 Slot 1 Firmware Revision: 24.09 01:17:58.606 01:17:58.606 01:17:58.606 Commands Supported and Effects 01:17:58.606 ============================== 01:17:58.606 Admin Commands 01:17:58.606 -------------- 01:17:58.606 Get Log Page (02h): Supported 01:17:58.606 Identify (06h): Supported 01:17:58.606 Abort (08h): Supported 01:17:58.606 Set Features (09h): Supported 01:17:58.606 Get Features (0Ah): Supported 01:17:58.606 Asynchronous Event Request (0Ch): Supported 01:17:58.606 Keep Alive (18h): Supported 01:17:58.606 I/O Commands 01:17:58.606 ------------ 01:17:58.606 Flush (00h): Supported LBA-Change 01:17:58.606 Write (01h): Supported LBA-Change 01:17:58.606 Read (02h): Supported 01:17:58.606 Compare (05h): Supported 01:17:58.606 Write Zeroes (08h): Supported LBA-Change 01:17:58.606 Dataset Management (09h): Supported LBA-Change 01:17:58.606 Copy (19h): Supported LBA-Change 01:17:58.606 01:17:58.606 Error Log 01:17:58.606 ========= 01:17:58.606 01:17:58.606 Arbitration 01:17:58.606 =========== 01:17:58.606 Arbitration Burst: 1 01:17:58.606 01:17:58.606 Power Management 01:17:58.606 ================ 
01:17:58.606 Number of Power States: 1 01:17:58.606 Current Power State: Power State #0 01:17:58.606 Power State #0: 01:17:58.606 Max Power: 0.00 W 01:17:58.606 Non-Operational State: Operational 01:17:58.606 Entry Latency: Not Reported 01:17:58.606 Exit Latency: Not Reported 01:17:58.606 Relative Read Throughput: 0 01:17:58.606 Relative Read Latency: 0 01:17:58.606 Relative Write Throughput: 0 01:17:58.606 Relative Write Latency: 0 01:17:58.606 Idle Power: Not Reported 01:17:58.606 Active Power: Not Reported 01:17:58.606 Non-Operational Permissive Mode: Not Supported 01:17:58.606 01:17:58.606 Health Information 01:17:58.606 ================== 01:17:58.606 Critical Warnings: 01:17:58.606 Available Spare Space: OK 01:17:58.606 Temperature: OK 01:17:58.606 Device Reliability: OK 01:17:58.606 Read Only: No 01:17:58.606 Volatile Memory Backup: OK 01:17:58.606 Current Temperature: 0 Kelvin (-273 Celsius) 01:17:58.606 Temperature Threshold: 0 Kelvin (-273 Celsius) 01:17:58.606 Available Spare: 0% 01:17:58.606 Available Spare Threshold: 0% 01:17:58.606 Life Percentage Used:[2024-07-22 11:15:03.599129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.606 [2024-07-22 11:15:03.599137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x113b6e0) 01:17:58.606 [2024-07-22 11:15:03.599144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.606 [2024-07-22 11:15:03.599173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187940, cid 7, qid 0 01:17:58.606 [2024-07-22 11:15:03.599624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.606 [2024-07-22 11:15:03.599662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.606 [2024-07-22 11:15:03.599666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.606 [2024-07-22 11:15:03.599669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187940) on tqpair=0x113b6e0 01:17:58.606 [2024-07-22 11:15:03.599733] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 01:17:58.606 [2024-07-22 11:15:03.599750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1186ec0) on tqpair=0x113b6e0 01:17:58.606 [2024-07-22 11:15:03.599757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:58.606 [2024-07-22 11:15:03.599763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187040) on tqpair=0x113b6e0 01:17:58.606 [2024-07-22 11:15:03.599767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:58.606 [2024-07-22 11:15:03.599772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11871c0) on tqpair=0x113b6e0 01:17:58.606 [2024-07-22 11:15:03.599776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:58.606 [2024-07-22 11:15:03.599781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.606 [2024-07-22 11:15:03.599785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:58.607 [2024-07-22 11:15:03.599793] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.599798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.599801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.599808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.599839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.599939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.599946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.599950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.599954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.599988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.599994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.599998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.600005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.600046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.600132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.600139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.600143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.600151] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 01:17:58.607 [2024-07-22 11:15:03.600155] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 01:17:58.607 [2024-07-22 11:15:03.600165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.600180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.600203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.600489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.600506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.600511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.600527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.600542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.600567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.600799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.600813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.600818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.600833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.600842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.600849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.600873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.601110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.601125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.601130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.601145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.601161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.601186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.601468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.601482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.601487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.601502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601507] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.601517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.601541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.601776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.601790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.601794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.601809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.601818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.601825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.601848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.602033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.602046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.602051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.602066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.602082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.602107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.602401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.602415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.602420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.602435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 
[2024-07-22 11:15:03.602450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.602475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.602543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.602550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.602553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.602567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.607 [2024-07-22 11:15:03.602582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.607 [2024-07-22 11:15:03.602605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.607 [2024-07-22 11:15:03.602800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.607 [2024-07-22 11:15:03.602813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.607 [2024-07-22 11:15:03.602818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.607 [2024-07-22 11:15:03.602822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.607 [2024-07-22 11:15:03.602833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.608 [2024-07-22 11:15:03.602838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.608 [2024-07-22 11:15:03.602842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.608 [2024-07-22 11:15:03.602849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.608 [2024-07-22 11:15:03.602872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.608 [2024-07-22 11:15:03.607017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.608 [2024-07-22 11:15:03.607036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.608 [2024-07-22 11:15:03.607042] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.608 [2024-07-22 11:15:03.607046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.608 [2024-07-22 11:15:03.607059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 01:17:58.608 [2024-07-22 11:15:03.607067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:17:58.608 [2024-07-22 11:15:03.607070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x113b6e0) 01:17:58.608 [2024-07-22 11:15:03.607078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:58.608 [2024-07-22 11:15:03.607108] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1187340, cid 3, qid 0 01:17:58.608 [2024-07-22 11:15:03.607167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:17:58.608 [2024-07-22 11:15:03.607174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:17:58.608 [2024-07-22 11:15:03.607177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:17:58.608 [2024-07-22 11:15:03.607181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1187340) on tqpair=0x113b6e0 01:17:58.608 [2024-07-22 11:15:03.607189] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 01:17:58.608 0% 01:17:58.608 Data Units Read: 0 01:17:58.608 Data Units Written: 0 01:17:58.608 Host Read Commands: 0 01:17:58.608 Host Write Commands: 0 01:17:58.608 Controller Busy Time: 0 minutes 01:17:58.608 Power Cycles: 0 01:17:58.608 Power On Hours: 0 hours 01:17:58.608 Unsafe Shutdowns: 0 01:17:58.608 Unrecoverable Media Errors: 0 01:17:58.608 Lifetime Error Log Entries: 0 01:17:58.608 Warning Temperature Time: 0 minutes 01:17:58.608 Critical Temperature Time: 0 minutes 01:17:58.608 01:17:58.608 Number of Queues 01:17:58.608 ================ 01:17:58.608 Number of I/O Submission Queues: 127 01:17:58.608 Number of I/O Completion Queues: 127 01:17:58.608 01:17:58.608 Active Namespaces 01:17:58.608 ================= 01:17:58.608 Namespace ID:1 01:17:58.608 Error Recovery Timeout: Unlimited 01:17:58.608 Command Set Identifier: NVM (00h) 01:17:58.608 Deallocate: Supported 01:17:58.608 Deallocated/Unwritten Error: Not Supported 01:17:58.608 Deallocated Read Value: Unknown 01:17:58.608 Deallocate in Write Zeroes: Not Supported 01:17:58.608 Deallocated Guard Field: 0xFFFF 01:17:58.608 Flush: Supported 01:17:58.608 Reservation: Supported 01:17:58.608 Namespace Sharing Capabilities: Multiple Controllers 01:17:58.608 Size (in LBAs): 131072 (0GiB) 01:17:58.608 Capacity (in LBAs): 131072 (0GiB) 01:17:58.608 Utilization (in LBAs): 131072 (0GiB) 01:17:58.608 NGUID: ABCDEF0123456789ABCDEF0123456789 01:17:58.608 EUI64: ABCDEF0123456789 01:17:58.608 UUID: 2329c190-9d00-45c0-814c-7c31bc1ebdc8 01:17:58.608 Thin Provisioning: Not Supported 01:17:58.608 Per-NS Atomic Units: Yes 01:17:58.608 Atomic Boundary Size (Normal): 0 01:17:58.608 Atomic Boundary Size (PFail): 0 01:17:58.608 Atomic Boundary Offset: 0 01:17:58.608 Maximum Single Source Range Length: 65535 01:17:58.608 Maximum Copy Length: 65535 01:17:58.608 Maximum Source Range Count: 1 01:17:58.608 NGUID/EUI64 Never Reused: No 01:17:58.608 Namespace Write Protected: No 01:17:58.608 Number of LBA Formats: 1 01:17:58.608 Current LBA Format: LBA Format #00 01:17:58.608 LBA Format #00: Data Size: 512 Metadata Size: 0 01:17:58.608 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@488 -- # nvmfcleanup 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:17:58.608 rmmod nvme_tcp 01:17:58.608 rmmod nvme_fabrics 01:17:58.608 rmmod nvme_keyring 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 105024 ']' 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 105024 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 105024 ']' 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 105024 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105024 01:17:58.608 killing process with pid 105024 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105024' 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 105024 01:17:58.608 11:15:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 105024 01:17:58.867 11:15:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:17:58.867 11:15:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:17:58.867 11:15:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:17:58.867 11:15:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:17:58.867 11:15:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 01:17:58.867 11:15:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:58.867 11:15:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:58.867 11:15:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:59.126 11:15:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:17:59.126 01:17:59.126 real 0m2.610s 01:17:59.126 user 0m7.428s 01:17:59.126 sys 0m0.707s 01:17:59.126 11:15:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 01:17:59.126 11:15:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:17:59.126 ************************************ 01:17:59.126 END TEST nvmf_identify 01:17:59.126 ************************************ 01:17:59.126 11:15:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:17:59.126 11:15:04 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test 
nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:17:59.126 11:15:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:17:59.126 11:15:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:17:59.126 11:15:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:59.126 ************************************ 01:17:59.126 START TEST nvmf_perf 01:17:59.126 ************************************ 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:17:59.126 * Looking for test storage... 01:17:59.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:59.126 11:15:04 
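The NVME_HOSTNQN/NVME_HOSTID pair generated above via nvme gen-hostnqn is what NVME_CONNECT='nvme connect' gets paired with elsewhere in the suite; a minimal sketch of how such a pair could be produced and consumed, reusing this run's 4420 port, 10.0.0.2 target address and nqn.2016-06.io.spdk:testnqn subsystem name (a sketch only, not the harness code itself):

    # Generate a host identity, then hand it to the kernel initiator.
    HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*:}            # bare UUID, matching NVME_HOSTID above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn --hostnqn="$HOSTNQN" --hostid="$HOSTID"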
nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:59.126 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:17:59.127 Cannot find device "nvmf_tgt_br" 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:17:59.127 Cannot find device "nvmf_tgt_br2" 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:17:59.127 Cannot find device "nvmf_tgt_br" 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 01:17:59.127 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:17:59.384 Cannot find device "nvmf_tgt_br2" 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:59.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 01:17:59.384 
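The nvmf_veth_init sequence traced next builds a small bridged topology: an initiator veth on the host (nvmf_init_if, 10.0.0.1/24) and a target veth inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if, 10.0.0.2/24), with their peer ends enslaved to the nvmf_br bridge. Condensed into plain iproute2 commands (the second target interface, nvmf_tgt_if2/10.0.0.3, follows the same pattern and is omitted from this sketch):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # host -> namespaced target, as verified below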
11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:59.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:17:59.384 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:17:59.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:17:59.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 01:17:59.642 01:17:59.642 --- 10.0.0.2 ping statistics --- 01:17:59.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:59.642 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:17:59.642 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:17:59.642 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 01:17:59.642 01:17:59.642 --- 10.0.0.3 ping statistics --- 01:17:59.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:59.642 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:59.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:59.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 01:17:59.642 01:17:59.642 --- 10.0.0.1 ping statistics --- 01:17:59.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:59.642 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=105250 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 105250 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 105250 ']' 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:59.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 01:17:59.642 11:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:17:59.642 [2024-07-22 11:15:04.729744] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:17:59.642 [2024-07-22 11:15:04.729828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:59.900 [2024-07-22 11:15:04.874175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:17:59.900 [2024-07-22 11:15:04.972262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:59.900 [2024-07-22 11:15:04.972583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:59.900 [2024-07-22 11:15:04.972858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:59.900 [2024-07-22 11:15:04.973176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:59.900 [2024-07-22 11:15:04.973386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:59.900 [2024-07-22 11:15:04.973774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:17:59.900 [2024-07-22 11:15:04.973886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:17:59.900 [2024-07-22 11:15:04.974014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:17:59.900 [2024-07-22 11:15:04.974025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:17:59.900 11:15:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:17:59.900 11:15:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 01:17:59.900 11:15:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:18:00.159 11:15:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 01:18:00.159 11:15:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:18:00.159 11:15:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:00.159 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:18:00.159 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 01:18:00.417 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 01:18:00.417 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 01:18:00.675 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 01:18:00.675 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:18:00.933 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 01:18:00.933 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 01:18:00.933 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 01:18:00.933 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 01:18:00.933 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:18:01.191 [2024-07-22 11:15:06.346226] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:01.191 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:18:01.448 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:18:01.448 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:18:02.013 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:18:02.013 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:18:02.013 11:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:18:02.271 [2024-07-22 11:15:07.312114] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:18:02.271 11:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:18:02.529 11:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:18:02.529 11:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:18:02.529 11:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 01:18:02.529 11:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:18:03.459 Initializing NVMe Controllers 01:18:03.459 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:18:03.459 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:18:03.459 Initialization complete. Launching workers. 01:18:03.459 ======================================================== 01:18:03.459 Latency(us) 01:18:03.459 Device Information : IOPS MiB/s Average min max 01:18:03.459 PCIE (0000:00:10.0) NSID 1 from core 0: 20766.80 81.12 1540.90 340.67 8512.74 01:18:03.459 ======================================================== 01:18:03.459 Total : 20766.80 81.12 1540.90 340.67 8512.74 01:18:03.459 01:18:03.459 11:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:04.830 Initializing NVMe Controllers 01:18:04.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:04.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:18:04.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:18:04.830 Initialization complete. Launching workers. 
01:18:04.830 ======================================================== 01:18:04.830 Latency(us) 01:18:04.830 Device Information : IOPS MiB/s Average min max 01:18:04.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3291.00 12.86 303.61 106.50 7155.77 01:18:04.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8186.35 7008.02 15025.06 01:18:04.830 ======================================================== 01:18:04.830 Total : 3414.00 13.34 587.61 106.50 15025.06 01:18:04.830 01:18:04.830 11:15:09 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:06.205 Initializing NVMe Controllers 01:18:06.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:06.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:18:06.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:18:06.205 Initialization complete. Launching workers. 01:18:06.205 ======================================================== 01:18:06.205 Latency(us) 01:18:06.205 Device Information : IOPS MiB/s Average min max 01:18:06.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9829.99 38.40 3259.20 625.84 10720.27 01:18:06.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2692.00 10.52 12002.89 6742.27 24124.11 01:18:06.205 ======================================================== 01:18:06.206 Total : 12521.99 48.91 5138.94 625.84 24124.11 01:18:06.206 01:18:06.206 11:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 01:18:06.206 11:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:08.763 Initializing NVMe Controllers 01:18:08.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:08.763 Controller IO queue size 128, less than required. 01:18:08.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:18:08.763 Controller IO queue size 128, less than required. 01:18:08.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:18:08.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:18:08.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:18:08.763 Initialization complete. Launching workers. 
01:18:08.763 ======================================================== 01:18:08.763 Latency(us) 01:18:08.763 Device Information : IOPS MiB/s Average min max 01:18:08.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1903.61 475.90 68068.65 44078.24 121786.53 01:18:08.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 591.88 147.97 224352.13 99646.01 377568.83 01:18:08.763 ======================================================== 01:18:08.763 Total : 2495.48 623.87 105135.89 44078.24 377568.83 01:18:08.763 01:18:08.763 11:15:13 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 01:18:08.763 Initializing NVMe Controllers 01:18:08.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:08.763 Controller IO queue size 128, less than required. 01:18:08.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:18:08.763 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 01:18:08.763 Controller IO queue size 128, less than required. 01:18:08.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:18:08.763 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 01:18:08.763 WARNING: Some requested NVMe devices were skipped 01:18:08.763 No valid NVMe controllers or AIO or URING devices found 01:18:08.763 11:15:13 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 01:18:11.292 Initializing NVMe Controllers 01:18:11.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:11.292 Controller IO queue size 128, less than required. 01:18:11.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:18:11.292 Controller IO queue size 128, less than required. 01:18:11.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:18:11.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:18:11.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:18:11.292 Initialization complete. Launching workers. 
01:18:11.292 01:18:11.292 ==================== 01:18:11.292 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 01:18:11.292 TCP transport: 01:18:11.292 polls: 8443 01:18:11.292 idle_polls: 5453 01:18:11.292 sock_completions: 2990 01:18:11.292 nvme_completions: 4165 01:18:11.292 submitted_requests: 6210 01:18:11.292 queued_requests: 1 01:18:11.292 01:18:11.292 ==================== 01:18:11.292 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 01:18:11.292 TCP transport: 01:18:11.292 polls: 11497 01:18:11.292 idle_polls: 8635 01:18:11.292 sock_completions: 2862 01:18:11.292 nvme_completions: 5887 01:18:11.292 submitted_requests: 8862 01:18:11.292 queued_requests: 1 01:18:11.292 ======================================================== 01:18:11.292 Latency(us) 01:18:11.292 Device Information : IOPS MiB/s Average min max 01:18:11.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1040.91 260.23 125878.73 85412.98 185651.94 01:18:11.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1471.37 367.84 87494.20 48804.19 148167.14 01:18:11.292 ======================================================== 01:18:11.292 Total : 2512.28 628.07 103398.00 48804.19 185651.94 01:18:11.293 01:18:11.293 11:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 01:18:11.293 11:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=30b7c985-0b00-4ccd-9200-a31bf2cfaec9 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 30b7c985-0b00-4ccd-9200-a31bf2cfaec9 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=30b7c985-0b00-4ccd-9200-a31bf2cfaec9 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 01:18:11.859 11:15:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:18:12.117 11:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:18:12.117 { 01:18:12.117 "base_bdev": "Nvme0n1", 01:18:12.117 "block_size": 4096, 01:18:12.117 "cluster_size": 4194304, 01:18:12.117 "free_clusters": 1278, 01:18:12.117 "name": "lvs_0", 01:18:12.117 "total_data_clusters": 1278, 01:18:12.117 "uuid": "30b7c985-0b00-4ccd-9200-a31bf2cfaec9" 01:18:12.117 } 01:18:12.117 ]' 01:18:12.117 11:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="30b7c985-0b00-4ccd-9200-a31bf2cfaec9") .free_clusters' 01:18:12.117 11:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 01:18:12.117 11:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="30b7c985-0b00-4ccd-9200-a31bf2cfaec9") .cluster_size' 01:18:12.375 5112 01:18:12.375 11:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # 
cs=4194304 01:18:12.375 11:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 01:18:12.375 11:15:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 01:18:12.375 11:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 01:18:12.375 11:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 30b7c985-0b00-4ccd-9200-a31bf2cfaec9 lbd_0 5112 01:18:12.633 11:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=841d6b6c-683e-4ed7-a91d-557fa0b2a936 01:18:12.633 11:15:17 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 841d6b6c-683e-4ed7-a91d-557fa0b2a936 lvs_n_0 01:18:12.891 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=c9c711e1-660e-4710-bae2-afb124085e3d 01:18:12.891 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb c9c711e1-660e-4710-bae2-afb124085e3d 01:18:12.891 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c9c711e1-660e-4710-bae2-afb124085e3d 01:18:12.891 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 01:18:12.891 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 01:18:12.891 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 01:18:12.891 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:18:13.149 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:18:13.149 { 01:18:13.149 "base_bdev": "Nvme0n1", 01:18:13.149 "block_size": 4096, 01:18:13.149 "cluster_size": 4194304, 01:18:13.149 "free_clusters": 0, 01:18:13.149 "name": "lvs_0", 01:18:13.149 "total_data_clusters": 1278, 01:18:13.149 "uuid": "30b7c985-0b00-4ccd-9200-a31bf2cfaec9" 01:18:13.149 }, 01:18:13.149 { 01:18:13.149 "base_bdev": "841d6b6c-683e-4ed7-a91d-557fa0b2a936", 01:18:13.149 "block_size": 4096, 01:18:13.149 "cluster_size": 4194304, 01:18:13.149 "free_clusters": 1276, 01:18:13.149 "name": "lvs_n_0", 01:18:13.149 "total_data_clusters": 1276, 01:18:13.150 "uuid": "c9c711e1-660e-4710-bae2-afb124085e3d" 01:18:13.150 } 01:18:13.150 ]' 01:18:13.150 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c9c711e1-660e-4710-bae2-afb124085e3d") .free_clusters' 01:18:13.150 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 01:18:13.150 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c9c711e1-660e-4710-bae2-afb124085e3d") .cluster_size' 01:18:13.150 5104 01:18:13.150 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 01:18:13.150 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 01:18:13.150 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 01:18:13.150 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 01:18:13.150 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c9c711e1-660e-4710-bae2-afb124085e3d lbd_nest_0 5104 01:18:13.408 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=2ec2a81a-991a-4f05-8208-bc381d0b2dc7 01:18:13.408 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
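The cluster arithmetic above works out as free_mb = free_clusters * cluster_size / 1 MiB, i.e. 1278 * 4194304 / 1048576 = 5112 MiB for lvs_0 and 1276 * 4 MiB = 5104 MiB for lvs_n_0. Stripped of the jq plumbing, the nested-lvol construction this test performs amounts to roughly the following (UUID capture via command substitution mirrors the trace; the exact output handling in perf.sh may differ):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ls_guid=$($rpc bdev_lvol_create_lvstore Nvme0n1 lvs_0)            # lvstore on the local NVMe bdev
    lb_guid=$($rpc bdev_lvol_create -u "$ls_guid" lbd_0 5112)         # lvol filling its 5112 MiB
    ls_nested=$($rpc bdev_lvol_create_lvstore "$lb_guid" lvs_n_0)     # lvstore nested on that lvol
    lb_nested=$($rpc bdev_lvol_create -u "$ls_nested" lbd_nest_0 5104)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$lb_nested"   # exported below as NSID 1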
01:18:13.666 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 01:18:13.666 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2ec2a81a-991a-4f05-8208-bc381d0b2dc7 01:18:13.925 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:18:14.183 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 01:18:14.183 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 01:18:14.183 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:18:14.183 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:18:14.183 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:14.442 Initializing NVMe Controllers 01:18:14.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:14.442 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:18:14.442 WARNING: Some requested NVMe devices were skipped 01:18:14.442 No valid NVMe controllers or AIO or URING devices found 01:18:14.442 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:18:14.442 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:26.645 Initializing NVMe Controllers 01:18:26.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:26.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:18:26.645 Initialization complete. Launching workers. 
01:18:26.645 ======================================================== 01:18:26.645 Latency(us) 01:18:26.645 Device Information : IOPS MiB/s Average min max 01:18:26.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 839.15 104.89 1191.43 404.53 8235.10 01:18:26.645 ======================================================== 01:18:26.645 Total : 839.15 104.89 1191.43 404.53 8235.10 01:18:26.645 01:18:26.645 11:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:18:26.645 11:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:18:26.645 11:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:26.645 Initializing NVMe Controllers 01:18:26.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:26.645 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:18:26.645 WARNING: Some requested NVMe devices were skipped 01:18:26.645 No valid NVMe controllers or AIO or URING devices found 01:18:26.645 11:15:30 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:18:26.645 11:15:30 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:36.616 Initializing NVMe Controllers 01:18:36.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:36.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:18:36.616 Initialization complete. Launching workers. 
01:18:36.616 ======================================================== 01:18:36.616 Latency(us) 01:18:36.616 Device Information : IOPS MiB/s Average min max 01:18:36.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1118.54 139.82 28643.29 8033.28 242120.78 01:18:36.616 ======================================================== 01:18:36.616 Total : 1118.54 139.82 28643.29 8033.28 242120.78 01:18:36.616 01:18:36.616 11:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 01:18:36.616 11:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:18:36.616 11:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:36.616 Initializing NVMe Controllers 01:18:36.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:36.616 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 01:18:36.616 WARNING: Some requested NVMe devices were skipped 01:18:36.616 No valid NVMe controllers or AIO or URING devices found 01:18:36.616 11:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 01:18:36.616 11:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:18:46.609 Initializing NVMe Controllers 01:18:46.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 01:18:46.609 Controller IO queue size 128, less than required. 01:18:46.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:18:46.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:18:46.609 Initialization complete. Launching workers. 
01:18:46.609 ======================================================== 01:18:46.609 Latency(us) 01:18:46.609 Device Information : IOPS MiB/s Average min max 01:18:46.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3947.69 493.46 32474.68 9954.57 68714.06 01:18:46.609 ======================================================== 01:18:46.609 Total : 3947.69 493.46 32474.68 9954.57 68714.06 01:18:46.609 01:18:46.609 11:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:18:46.609 11:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2ec2a81a-991a-4f05-8208-bc381d0b2dc7 01:18:46.609 11:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 01:18:46.867 11:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 841d6b6c-683e-4ed7-a91d-557fa0b2a936 01:18:47.125 11:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:18:47.383 rmmod nvme_tcp 01:18:47.383 rmmod nvme_fabrics 01:18:47.383 rmmod nvme_keyring 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 105250 ']' 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 105250 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 105250 ']' 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 105250 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105250 01:18:47.383 killing process with pid 105250 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105250' 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 105250 01:18:47.383 11:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 105250 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:18:48.754 01:18:48.754 real 0m49.729s 01:18:48.754 user 3m7.673s 01:18:48.754 sys 0m10.204s 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 01:18:48.754 11:15:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:18:48.754 ************************************ 01:18:48.754 END TEST nvmf_perf 01:18:48.754 ************************************ 01:18:48.754 11:15:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:18:48.754 11:15:53 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:18:48.754 11:15:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:18:48.754 11:15:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:18:48.754 11:15:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:18:48.754 ************************************ 01:18:48.754 START TEST nvmf_fio_host 01:18:48.754 ************************************ 01:18:49.012 11:15:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:18:49.012 * Looking for test storage... 
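nvmf_perf ends here and run_test launches host/fio.sh, which drives the same TCP target through fio's SPDK external ioengine. The fio_plugin() xtrace that recurs throughout the test below amounts to preloading the spdk_nvme engine library and encoding the connection parameters in fio's filename option; roughly as sketched here, where the paths, the filename string and the --bs value are taken from the log and the rest is a condensed sketch rather than the script's exact source.

  # Condensed sketch of how host/fio.sh invokes fio through the SPDK plugin (see the xtrace below).
  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The intervening ldd/grep calls in that xtrace only check whether an ASan runtime needs to be preloaded alongside the plugin; in this run none is found (asan_lib stays empty), so LD_PRELOAD carries the plugin alone.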
01:18:49.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:49.012 11:15:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
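Because NET_TYPE is virt and is_hw stays "no", nvmftestinit falls through to nvmf_veth_init, whose xtrace follows. Condensed, it connects the initiator (10.0.0.1) to a target network namespace (10.0.0.2, plus a second target interface at 10.0.0.3) over veth pairs and a bridge, opens TCP port 4420, and verifies reachability with ping. A sketch of that sequence, with interface names and addresses as they appear in the log; the second target interface is handled the same way and is omitted here for brevity.

  # Condensed from the nvmf_veth_init xtrace below.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so it listens on 10.0.0.2 while the host-side tools connect from 10.0.0.1.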
01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:18:49.013 Cannot find device "nvmf_tgt_br" 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:18:49.013 Cannot find device "nvmf_tgt_br2" 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:18:49.013 Cannot find device "nvmf_tgt_br" 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:18:49.013 Cannot find device "nvmf_tgt_br2" 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:49.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:49.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 01:18:49.013 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:18:49.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:18:49.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 01:18:49.271 01:18:49.271 --- 10.0.0.2 ping statistics --- 01:18:49.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:49.271 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:18:49.271 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:18:49.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:49.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 01:18:49.271 01:18:49.271 --- 10.0.0.3 ping statistics --- 01:18:49.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:49.271 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:49.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:18:49.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 01:18:49.272 01:18:49.272 --- 10.0.0.1 ping statistics --- 01:18:49.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:49.272 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=106191 01:18:49.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 106191 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 106191 ']' 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:18:49.272 11:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:18:49.272 [2024-07-22 11:15:54.459508] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:18:49.272 [2024-07-22 11:15:54.459591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:49.530 [2024-07-22 11:15:54.603783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:18:49.530 [2024-07-22 11:15:54.686324] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:18:49.530 [2024-07-22 11:15:54.686553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:49.530 [2024-07-22 11:15:54.686794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:49.530 [2024-07-22 11:15:54.687043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:49.530 [2024-07-22 11:15:54.687179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:18:49.530 [2024-07-22 11:15:54.687416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:18:49.530 [2024-07-22 11:15:54.687532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:18:49.530 [2024-07-22 11:15:54.687693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:18:49.530 [2024-07-22 11:15:54.687704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:18:50.462 11:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:18:50.462 11:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 01:18:50.462 11:15:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:18:50.462 [2024-07-22 11:15:55.619067] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:50.462 11:15:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 01:18:50.462 11:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 01:18:50.462 11:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:18:50.719 11:15:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:18:50.977 Malloc1 01:18:50.977 11:15:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:18:51.235 11:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:18:51.235 11:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:18:51.494 [2024-07-22 11:15:56.600754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:18:51.494 11:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 01:18:51.751 11:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:18:51.751 11:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:18:51.751 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:18:51.751 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:18:51.752 11:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:18:52.010 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:18:52.010 fio-3.35 01:18:52.010 Starting 1 thread 01:18:54.548 01:18:54.548 test: (groupid=0, jobs=1): err= 0: pid=106317: Mon Jul 22 11:15:59 2024 01:18:54.548 read: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(85.1MiB/2006msec) 01:18:54.548 slat (nsec): min=1676, max=414461, avg=2227.39, stdev=3768.64 01:18:54.548 clat (usec): min=3335, max=11530, avg=6168.79, stdev=433.36 01:18:54.548 lat (usec): min=3361, max=11532, avg=6171.02, stdev=433.06 01:18:54.548 clat percentiles (usec): 01:18:54.548 | 1.00th=[ 5276], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5866], 01:18:54.548 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 01:18:54.548 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 01:18:54.548 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 9503], 99.95th=[10159], 01:18:54.548 | 99.99th=[11469] 01:18:54.548 bw ( KiB/s): min=42424, max=44208, per=99.99%, avg=43426.00, stdev=745.77, samples=4 01:18:54.548 iops : min=10606, max=11052, avg=10856.50, stdev=186.44, samples=4 01:18:54.548 write: IOPS=10.8k, BW=42.3MiB/s (44.4MB/s)(84.9MiB/2006msec); 0 zone resets 01:18:54.548 slat (nsec): min=1738, max=292180, avg=2293.43, stdev=2491.13 01:18:54.548 clat (usec): min=2670, max=10140, avg=5587.90, stdev=376.55 
01:18:54.548 lat (usec): min=2684, max=10143, avg=5590.19, stdev=376.37 01:18:54.548 clat percentiles (usec): 01:18:54.548 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5145], 20.00th=[ 5342], 01:18:54.548 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5669], 01:18:54.548 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 6128], 01:18:54.548 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 8225], 99.95th=[ 9503], 01:18:54.548 | 99.99th=[10028] 01:18:54.548 bw ( KiB/s): min=42880, max=43712, per=100.00%, avg=43362.00, stdev=349.35, samples=4 01:18:54.548 iops : min=10720, max=10928, avg=10840.50, stdev=87.34, samples=4 01:18:54.548 lat (msec) : 4=0.18%, 10=99.78%, 20=0.04% 01:18:54.548 cpu : usr=60.10%, sys=29.03%, ctx=11, majf=0, minf=8 01:18:54.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:18:54.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:18:54.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:18:54.548 issued rwts: total=21780,21741,0,0 short=0,0,0,0 dropped=0,0,0,0 01:18:54.548 latency : target=0, window=0, percentile=100.00%, depth=128 01:18:54.548 01:18:54.548 Run status group 0 (all jobs): 01:18:54.548 READ: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=85.1MiB (89.2MB), run=2006-2006msec 01:18:54.548 WRITE: bw=42.3MiB/s (44.4MB/s), 42.3MiB/s-42.3MiB/s (44.4MB/s-44.4MB/s), io=84.9MiB (89.1MB), run=2006-2006msec 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:18:54.548 11:15:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 01:18:54.548 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 01:18:54.548 fio-3.35 01:18:54.548 Starting 1 thread 01:18:57.074 01:18:57.074 test: (groupid=0, jobs=1): err= 0: pid=106360: Mon Jul 22 11:16:01 2024 01:18:57.074 read: IOPS=8569, BW=134MiB/s (140MB/s)(269MiB/2009msec) 01:18:57.074 slat (usec): min=2, max=119, avg= 3.44, stdev= 2.30 01:18:57.074 clat (usec): min=202, max=18783, avg=8807.07, stdev=2168.47 01:18:57.074 lat (usec): min=210, max=18786, avg=8810.51, stdev=2168.60 01:18:57.074 clat percentiles (usec): 01:18:57.074 | 1.00th=[ 4490], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6849], 01:18:57.074 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 01:18:57.074 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11338], 95.00th=[12649], 01:18:57.074 | 99.00th=[14615], 99.50th=[15139], 99.90th=[17171], 99.95th=[18220], 01:18:57.074 | 99.99th=[18482] 01:18:57.074 bw ( KiB/s): min=64416, max=75232, per=51.46%, avg=70560.00, stdev=4547.68, samples=4 01:18:57.074 iops : min= 4026, max= 4702, avg=4410.00, stdev=284.23, samples=4 01:18:57.074 write: IOPS=4992, BW=78.0MiB/s (81.8MB/s)(144MiB/1842msec); 0 zone resets 01:18:57.074 slat (usec): min=29, max=331, avg=34.94, stdev= 9.64 01:18:57.074 clat (usec): min=2707, max=16943, avg=10720.22, stdev=2068.23 01:18:57.074 lat (usec): min=2737, max=16990, avg=10755.16, stdev=2069.99 01:18:57.074 clat percentiles (usec): 01:18:57.074 | 1.00th=[ 7046], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 8979], 01:18:57.074 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10945], 01:18:57.074 | 70.00th=[11469], 80.00th=[12387], 90.00th=[13829], 95.00th=[14877], 01:18:57.074 | 99.00th=[15926], 99.50th=[16319], 99.90th=[16712], 99.95th=[16909], 01:18:57.074 | 99.99th=[16909] 01:18:57.074 bw ( KiB/s): min=67136, max=78912, per=92.10%, avg=73568.00, stdev=4858.52, samples=4 01:18:57.074 iops : min= 4196, max= 4932, avg=4598.00, stdev=303.66, samples=4 01:18:57.074 lat (usec) : 250=0.01% 01:18:57.074 lat (msec) : 4=0.28%, 10=59.82%, 20=39.90% 01:18:57.074 cpu : usr=69.02%, sys=20.17%, ctx=25, majf=0, minf=4 01:18:57.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 01:18:57.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:18:57.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:18:57.074 issued rwts: total=17217,9196,0,0 short=0,0,0,0 dropped=0,0,0,0 01:18:57.074 latency : target=0, window=0, percentile=100.00%, depth=128 01:18:57.074 01:18:57.074 Run status group 0 (all jobs): 01:18:57.074 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=269MiB (282MB), run=2009-2009msec 01:18:57.074 WRITE: 
bw=78.0MiB/s (81.8MB/s), 78.0MiB/s-78.0MiB/s (81.8MB/s-81.8MB/s), io=144MiB (151MB), run=1842-1842msec 01:18:57.074 11:16:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:18:57.074 11:16:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 01:18:57.331 Nvme0n1 01:18:57.331 11:16:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 01:18:57.589 11:16:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=44d503ef-9ab7-4f5e-be5d-baf7d6a948db 01:18:57.589 11:16:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 44d503ef-9ab7-4f5e-be5d-baf7d6a948db 01:18:57.589 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=44d503ef-9ab7-4f5e-be5d-baf7d6a948db 01:18:57.589 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 01:18:57.589 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 01:18:57.589 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 01:18:57.589 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:18:57.863 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:18:57.863 { 01:18:57.863 "base_bdev": "Nvme0n1", 01:18:57.863 "block_size": 4096, 01:18:57.863 "cluster_size": 1073741824, 01:18:57.863 "free_clusters": 4, 01:18:57.863 "name": "lvs_0", 01:18:57.863 "total_data_clusters": 4, 01:18:57.863 "uuid": "44d503ef-9ab7-4f5e-be5d-baf7d6a948db" 01:18:57.863 } 01:18:57.863 ]' 01:18:57.863 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="44d503ef-9ab7-4f5e-be5d-baf7d6a948db") .free_clusters' 01:18:57.863 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 01:18:57.863 11:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="44d503ef-9ab7-4f5e-be5d-baf7d6a948db") .cluster_size' 01:18:57.863 4096 01:18:57.863 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 01:18:57.863 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # 
free_mb=4096 01:18:57.863 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 01:18:57.863 11:16:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 01:18:58.126 34f14d80-d0c3-4b32-9944-6ba173e7521a 01:18:58.126 11:16:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 01:18:58.397 11:16:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:18:58.655 11:16:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:18:58.919 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:18:58.919 fio-3.35 01:18:58.919 Starting 1 thread 01:19:01.507 01:19:01.507 test: (groupid=0, jobs=1): err= 0: pid=106518: Mon Jul 22 11:16:06 2024 01:19:01.507 read: IOPS=6141, BW=24.0MiB/s (25.2MB/s)(48.2MiB/2010msec) 01:19:01.507 slat (nsec): min=1789, max=299890, avg=3102.18, stdev=4504.06 01:19:01.507 clat (usec): min=4273, max=20178, avg=11060.20, stdev=1082.62 01:19:01.507 lat (usec): min=4280, max=20180, avg=11063.30, stdev=1082.40 01:19:01.507 clat percentiles (usec): 01:19:01.507 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 01:19:01.507 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 01:19:01.507 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12387], 95.00th=[12780], 01:19:01.507 | 99.00th=[13698], 99.50th=[14091], 99.90th=[17695], 99.95th=[19006], 01:19:01.507 | 99.99th=[20055] 01:19:01.507 bw ( KiB/s): min=23744, max=25320, per=100.00%, avg=24566.00, stdev=659.03, samples=4 01:19:01.507 iops : min= 5936, max= 6330, avg=6141.50, stdev=164.76, samples=4 01:19:01.507 write: IOPS=6127, BW=23.9MiB/s (25.1MB/s)(48.1MiB/2010msec); 0 zone resets 01:19:01.507 slat (nsec): min=1882, max=236761, avg=3252.00, stdev=4151.92 01:19:01.507 clat (usec): min=2191, max=18839, avg=9718.73, stdev=946.57 01:19:01.507 lat (usec): min=2200, max=18842, avg=9721.99, stdev=946.43 01:19:01.507 clat percentiles (usec): 01:19:01.507 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 01:19:01.507 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 01:19:01.507 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 01:19:01.507 | 99.00th=[11863], 99.50th=[12387], 99.90th=[15926], 99.95th=[17433], 01:19:01.507 | 99.99th=[18744] 01:19:01.507 bw ( KiB/s): min=24200, max=24704, per=99.97%, avg=24504.00, stdev=237.05, samples=4 01:19:01.507 iops : min= 6050, max= 6176, avg=6126.00, stdev=59.26, samples=4 01:19:01.507 lat (msec) : 4=0.04%, 10=39.27%, 20=60.67%, 50=0.02% 01:19:01.507 cpu : usr=66.80%, sys=24.69%, ctx=10, majf=0, minf=8 01:19:01.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 01:19:01.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:01.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:01.507 issued rwts: total=12345,12317,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:01.507 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:01.507 01:19:01.507 Run status group 0 (all jobs): 01:19:01.507 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.2MiB (50.6MB), run=2010-2010msec 01:19:01.507 WRITE: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.1MiB (50.5MB), run=2010-2010msec 01:19:01.507 11:16:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:19:01.507 11:16:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 01:19:01.766 11:16:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=56324d0f-5dad-404a-9176-83a9702ae1a3 01:19:01.766 11:16:06 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 56324d0f-5dad-404a-9176-83a9702ae1a3 01:19:01.766 11:16:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=56324d0f-5dad-404a-9176-83a9702ae1a3 01:19:01.766 11:16:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 01:19:01.766 11:16:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 01:19:01.766 11:16:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 01:19:01.766 11:16:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 01:19:02.024 11:16:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 01:19:02.024 { 01:19:02.024 "base_bdev": "Nvme0n1", 01:19:02.024 "block_size": 4096, 01:19:02.024 "cluster_size": 1073741824, 01:19:02.024 "free_clusters": 0, 01:19:02.024 "name": "lvs_0", 01:19:02.024 "total_data_clusters": 4, 01:19:02.024 "uuid": "44d503ef-9ab7-4f5e-be5d-baf7d6a948db" 01:19:02.024 }, 01:19:02.024 { 01:19:02.024 "base_bdev": "34f14d80-d0c3-4b32-9944-6ba173e7521a", 01:19:02.024 "block_size": 4096, 01:19:02.024 "cluster_size": 4194304, 01:19:02.024 "free_clusters": 1022, 01:19:02.024 "name": "lvs_n_0", 01:19:02.024 "total_data_clusters": 1022, 01:19:02.024 "uuid": "56324d0f-5dad-404a-9176-83a9702ae1a3" 01:19:02.024 } 01:19:02.024 ]' 01:19:02.024 11:16:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="56324d0f-5dad-404a-9176-83a9702ae1a3") .free_clusters' 01:19:02.024 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 01:19:02.024 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="56324d0f-5dad-404a-9176-83a9702ae1a3") .cluster_size' 01:19:02.024 4088 01:19:02.024 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 01:19:02.024 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 01:19:02.024 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 01:19:02.024 11:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 01:19:02.282 b451961d-1c09-42f6-9536-6ea46020d600 01:19:02.282 11:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 01:19:02.540 11:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:19:02.798 11:16:07 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:19:02.798 11:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:19:03.056 11:16:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 01:19:03.056 11:16:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:19:03.056 11:16:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:19:03.056 11:16:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 01:19:03.056 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:19:03.056 fio-3.35 01:19:03.056 Starting 1 thread 01:19:05.584 01:19:05.584 test: (groupid=0, jobs=1): err= 0: pid=106633: Mon Jul 22 11:16:10 2024 01:19:05.584 read: IOPS=6461, BW=25.2MiB/s (26.5MB/s)(50.7MiB/2009msec) 01:19:05.584 slat (nsec): min=1829, max=359562, avg=2877.40, stdev=4910.15 01:19:05.584 clat (usec): min=4529, max=18218, avg=10519.67, stdev=1036.41 01:19:05.584 lat (usec): min=4539, max=18220, avg=10522.54, stdev=1036.26 01:19:05.584 clat percentiles (usec): 01:19:05.584 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 01:19:05.584 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 01:19:05.584 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11863], 95.00th=[12256], 01:19:05.584 | 99.00th=[13042], 99.50th=[13435], 99.90th=[16909], 99.95th=[17957], 01:19:05.584 | 99.99th=[18220] 01:19:05.584 bw ( KiB/s): min=24552, max=26888, per=99.99%, avg=25842.00, stdev=981.83, samples=4 01:19:05.584 iops : min= 6138, max= 6722, avg=6460.50, stdev=245.46, samples=4 01:19:05.584 write: IOPS=6472, BW=25.3MiB/s (26.5MB/s)(50.8MiB/2009msec); 0 zone resets 01:19:05.584 slat (nsec): min=1891, max=302855, avg=2987.60, stdev=3711.58 01:19:05.584 clat 
(usec): min=2796, max=17712, avg=9233.62, stdev=892.36 01:19:05.584 lat (usec): min=2810, max=17715, avg=9236.60, stdev=892.35 01:19:05.584 clat percentiles (usec): 01:19:05.584 | 1.00th=[ 7242], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8586], 01:19:05.584 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 01:19:05.584 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 01:19:05.584 | 99.00th=[11207], 99.50th=[11469], 99.90th=[16909], 99.95th=[17171], 01:19:05.584 | 99.99th=[17695] 01:19:05.584 bw ( KiB/s): min=25720, max=25992, per=99.93%, avg=25874.00, stdev=135.67, samples=4 01:19:05.584 iops : min= 6430, max= 6498, avg=6468.50, stdev=33.92, samples=4 01:19:05.584 lat (msec) : 4=0.03%, 10=57.30%, 20=42.67% 01:19:05.584 cpu : usr=67.48%, sys=24.30%, ctx=4, majf=0, minf=8 01:19:05.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 01:19:05.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:05.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:05.584 issued rwts: total=12981,13004,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:05.584 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:05.584 01:19:05.584 Run status group 0 (all jobs): 01:19:05.584 READ: bw=25.2MiB/s (26.5MB/s), 25.2MiB/s-25.2MiB/s (26.5MB/s-26.5MB/s), io=50.7MiB (53.2MB), run=2009-2009msec 01:19:05.584 WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.8MiB (53.3MB), run=2009-2009msec 01:19:05.584 11:16:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 01:19:05.584 11:16:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 01:19:05.584 11:16:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 01:19:05.852 11:16:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 01:19:06.110 11:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 01:19:06.369 11:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 01:19:06.369 11:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:19:06.627 rmmod nvme_tcp 01:19:06.627 rmmod nvme_fabrics 01:19:06.627 rmmod nvme_keyring 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 
-- # set -e 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 106191 ']' 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 106191 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 106191 ']' 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 106191 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:06.627 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106191 01:19:06.885 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:19:06.885 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:19:06.885 killing process with pid 106191 01:19:06.885 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106191' 01:19:06.885 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 106191 01:19:06.885 11:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 106191 01:19:06.885 11:16:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:19:06.885 11:16:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:19:06.885 11:16:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:19:06.885 11:16:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:19:06.885 11:16:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 01:19:06.885 11:16:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:06.885 11:16:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:06.885 11:16:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:06.886 11:16:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:19:07.144 ************************************ 01:19:07.144 END TEST nvmf_fio_host 01:19:07.144 ************************************ 01:19:07.144 01:19:07.144 real 0m18.136s 01:19:07.144 user 1m19.394s 01:19:07.144 sys 0m4.574s 01:19:07.144 11:16:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 01:19:07.144 11:16:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:19:07.144 11:16:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:19:07.144 11:16:12 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:19:07.144 11:16:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:19:07.144 11:16:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:19:07.144 11:16:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:07.144 ************************************ 01:19:07.144 START TEST nvmf_failover 01:19:07.144 ************************************ 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:19:07.144 * Looking for test storage... 
01:19:07.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:19:07.144 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 01:19:07.145 
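At this point host/failover.sh has sourced test/nvmf/common.sh and scripts/common.sh; the long PATH lines above are just paths/export.sh re-exporting the same toolchain directories on every re-source. The defaults the test relies on, condensed from the trace (variable names copied from the log; the host NQN is generated fresh by nvme gen-hostnqn, so its UUID differs between builds):

    # Listener ports and sizes used by host/failover.sh (values as traced above)
    NVMF_PORT=4420                    # primary listener
    NVMF_SECOND_PORT=4421             # first failover listener
    NVMF_THIRD_PORT=4422              # second failover listener
    NVME_HOSTNQN=$(nvme gen-hostnqn)  # regenerated on every run
    MALLOC_BDEV_SIZE=64               # size in MB of the Malloc0 bdev created later
    MALLOC_BLOCK_SIZE=512             # block size in bytes
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock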
11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:19:07.145 Cannot find device "nvmf_tgt_br" 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:19:07.145 Cannot find device "nvmf_tgt_br2" 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:19:07.145 Cannot find device "nvmf_tgt_br" 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:19:07.145 Cannot find device "nvmf_tgt_br2" 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:19:07.145 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
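The "Cannot find device ..." messages above (and the "Cannot open network namespace ..." ones just below) are expected: nvmf_veth_init first tears down whatever a previous run may have left behind, then rebuilds the topology. A condensed sketch of what it builds next, using the same commands that appear in the following trace (interface and namespace names are the harness's own; the link-up steps are omitted here):

    # Target side lives in its own network namespace, bridged back to the initiator side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT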
01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:07.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:07.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:19:07.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:19:07.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 01:19:07.403 01:19:07.403 --- 10.0.0.2 ping statistics --- 01:19:07.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:07.403 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:19:07.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:07.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 01:19:07.403 01:19:07.403 --- 10.0.0.3 ping statistics --- 01:19:07.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:07.403 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:07.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:19:07.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:19:07.403 01:19:07.403 --- 10.0.0.1 ping statistics --- 01:19:07.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:07.403 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=106895 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 106895 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 106895 ']' 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:07.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
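With connectivity across the bridge verified by the three pings above, nvmfappstart launches the target inside the namespace. Roughly what that amounts to, assuming the paths shown in the trace (the -m 0xE core mask gives the target cores 1-3, which matches the three reactors reported a little further down):

    # Start the NVMe-oF target inside the test namespace, then wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # the harness then blocks until /var/tmp/spdk.sock is up and accepting RPCs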
01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:07.403 11:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:19:07.662 [2024-07-22 11:16:12.631300] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:19:07.662 [2024-07-22 11:16:12.631362] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:07.662 [2024-07-22 11:16:12.768350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 01:19:07.662 [2024-07-22 11:16:12.849579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:07.662 [2024-07-22 11:16:12.849649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:07.662 [2024-07-22 11:16:12.849674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:07.662 [2024-07-22 11:16:12.849685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:07.662 [2024-07-22 11:16:12.849694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:07.662 [2024-07-22 11:16:12.849835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:19:07.662 [2024-07-22 11:16:12.850426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:19:07.662 [2024-07-22 11:16:12.850480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:19:08.597 11:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:08.597 11:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 01:19:08.597 11:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:19:08.597 11:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 01:19:08.597 11:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:19:08.597 11:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:08.597 11:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:19:08.597 [2024-07-22 11:16:13.791625] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:08.855 11:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:19:08.855 Malloc0 01:19:08.855 11:16:14 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:19:09.114 11:16:14 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:19:09.681 11:16:14 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:09.681 [2024-07-22 11:16:14.776180] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:19:09.681 11:16:14 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:19:09.940 [2024-07-22 11:16:14.968336] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:19:09.940 11:16:14 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 01:19:10.199 [2024-07-22 11:16:15.168606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 01:19:10.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=107007 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 107007 /var/tmp/bdevperf.sock 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 107007 ']' 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
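The steps traced across the last two lines provision everything the failover run needs. Condensed into one place (commands copied from the trace, only rolled into a loop for the three listeners):

    # Transport, backing bdev, subsystem, and one listener per port (as traced above)
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done
    # bdevperf is started with its own RPC socket so paths can be attached while it runs
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &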
01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:10.199 11:16:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:19:11.133 11:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:11.133 11:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 01:19:11.133 11:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:11.390 NVMe0n1 01:19:11.390 11:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:11.648 01:19:11.648 11:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=107053 01:19:11.648 11:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:19:11.648 11:16:16 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 01:19:12.581 11:16:17 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:12.839 [2024-07-22 11:16:17.954397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.839 [2024-07-22 11:16:17.954561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.840 [2024-07-22 11:16:17.954568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.840 [2024-07-22 11:16:17.954576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the 
state(5) to be set 01:19:12.840 [2024-07-22 11:16:17.954583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.840 [2024-07-22 11:16:17.954591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.840 [2024-07-22 11:16:17.954598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.840 [2024-07-22 11:16:17.954618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.840 [2024-07-22 11:16:17.955051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79b90 is same with the state(5) to be set 01:19:12.840 11:16:17 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 01:19:16.123 11:16:20 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:16.123 01:19:16.123 11:16:21 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:19:16.382 [2024-07-22 11:16:21.521459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.382 [2024-07-22 11:16:21.521521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.382 [2024-07-22 11:16:21.521531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.382 [2024-07-22 11:16:21.521539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.382 [2024-07-22 11:16:21.521547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.382 [2024-07-22 11:16:21.521554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.382 [2024-07-22 11:16:21.521562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521613] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 [2024-07-22 11:16:21.521690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b2a0 is same with the state(5) to be set 01:19:16.383 11:16:21 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 01:19:19.667 11:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:19.667 [2024-07-22 11:16:24.794045] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:19:19.667 11:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 01:19:21.043 11:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 01:19:21.043 [2024-07-22 11:16:26.048650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 
is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.043 [2024-07-22 11:16:26.048830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.048988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049148] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 [2024-07-22 11:16:26.049229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7bac0 is same with the state(5) to be set 01:19:21.044 11:16:26 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 107053 01:19:27.613 0 01:19:27.613 11:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 107007 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 107007 ']' 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 107007 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107007 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:19:27.614 killing process with pid 107007 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107007' 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 107007 01:19:27.614 11:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 107007 01:19:27.614 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:19:27.614 [2024-07-22 11:16:15.231350] Starting SPDK v24.09-pre git sha1 
8fb860b73 / DPDK 23.11.0 initialization... 01:19:27.614 [2024-07-22 11:16:15.231431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107007 ] 01:19:27.614 [2024-07-22 11:16:15.365288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:27.614 [2024-07-22 11:16:15.442717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:19:27.614 Running I/O for 15 seconds... 01:19:27.614 [2024-07-22 11:16:17.955590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.955968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.955981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 
[2024-07-22 11:16:17.956548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.614 [2024-07-22 11:16:17.956573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.614 [2024-07-22 11:16:17.956718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.614 [2024-07-22 11:16:17.956737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.956954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.956982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.615 [2024-07-22 11:16:17.957482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:19:27.615 [2024-07-22 11:16:17.957693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.615 [2024-07-22 11:16:17.957920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.615 [2024-07-22 11:16:17.957931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 
11:16:17.957944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.957955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.957984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958668] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.616 [2024-07-22 11:16:17.958905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.958929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.958953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.958966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.958998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.616 [2024-07-22 11:16:17.959290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.616 [2024-07-22 11:16:17.959302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:17.959327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:17.959367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:17.959398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:17.959422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:17.959446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a1c0 is same with the state(5) to be set 01:19:27.617 [2024-07-22 11:16:17.959473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.617 [2024-07-22 11:16:17.959482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.617 [2024-07-22 11:16:17.959491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102440 len:8 PRP1 0x0 PRP2 0x0 01:19:27.617 [2024-07-22 11:16:17.959502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959555] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x170a1c0 was disconnected and freed. 
reset controller. 01:19:27.617 [2024-07-22 11:16:17.959576] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 01:19:27.617 [2024-07-22 11:16:17.959655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:19:27.617 [2024-07-22 11:16:17.959691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:19:27.617 [2024-07-22 11:16:17.959717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:19:27.617 [2024-07-22 11:16:17.959740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:19:27.617 [2024-07-22 11:16:17.959764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:17.959776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:27.617 [2024-07-22 11:16:17.963133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:27.617 [2024-07-22 11:16:17.963167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ec110 (9): Bad file descriptor 01:19:27.617 [2024-07-22 11:16:17.998309] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
01:19:27.617 [2024-07-22 11:16:21.521995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522307] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.617 [2024-07-22 11:16:21.522472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522560] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.617 [2024-07-22 11:16:21.522760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.617 [2024-07-22 11:16:21.522771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.522784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.522796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.522809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36944 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.522820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.522833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.522845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.522857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.522870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.522883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.522894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.522907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.522919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.522931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.522943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.522981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.522995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 
[2024-07-22 11:16:21.523103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.618 [2024-07-22 11:16:21.523382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.618 [2024-07-22 11:16:21.523727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.618 [2024-07-22 11:16:21.523739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.523977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.523990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 
[2024-07-22 11:16:21.524307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.619 [2024-07-22 11:16:21.524691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:102 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.619 [2024-07-22 11:16:21.524854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.619 [2024-07-22 11:16:21.524865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.524877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.620 [2024-07-22 11:16:21.524889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.524921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.524935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36656 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.524946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.524961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.524970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.524989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36664 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36672 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36680 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525089] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36688 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36696 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36704 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36712 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36720 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36728 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36736 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36744 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36752 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36760 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36768 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36776 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525643] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36784 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36792 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36800 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36808 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36816 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.620 [2024-07-22 11:16:21.525842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36824 len:8 PRP1 0x0 PRP2 0x0 01:19:27.620 [2024-07-22 11:16:21.525858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.620 [2024-07-22 11:16:21.525870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.620 [2024-07-22 11:16:21.525878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually:
01:19:27.620 [2024-07-22 11:16:21.525886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36832 len:8 PRP1 0x0 PRP2 0x0
01:19:27.620 [2024-07-22 11:16:21.525896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:19:27.620 [2024-07-22 11:16:21.525947] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17189c0 was disconnected and freed. reset controller.
01:19:27.620 [2024-07-22 11:16:21.525974] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
01:19:27.620 [2024-07-22 11:16:21.526047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
01:19:27.620 [2024-07-22 11:16:21.526066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:19:27.620 [2024-07-22 11:16:21.526079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
01:19:27.620 [2024-07-22 11:16:21.526091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:19:27.621 [2024-07-22 11:16:21.526102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
01:19:27.621 [2024-07-22 11:16:21.526113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:19:27.621 [2024-07-22 11:16:21.526125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
01:19:27.621 [2024-07-22 11:16:21.526136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:19:27.621 [2024-07-22 11:16:21.526147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:19:27.621 [2024-07-22 11:16:21.526181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ec110 (9): Bad file descriptor
01:19:27.621 [2024-07-22 11:16:21.529466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:19:27.621 [2024-07-22 11:16:21.567542] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
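Editor's note: the block above is the interesting transition in this stretch of the log: the qpair on 10.0.0.2:4421 is torn down, every queued I/O completes with "ABORTED - SQ DELETION (00/08)", and bdev_nvme fails over to 10.0.0.2:4422 and resets the controller. The sketch below (not part of the captured test output) is a minimal illustration of how a host using the raw SPDK NVMe driver could recognize that same status and treat it as retryable; the callback name and the requeue policy are hypothetical, while the spdk_nvme_cpl layout, the status constants, and spdk_nvme_cpl_is_error() come from the SPDK public headers.

/*
 * Minimal sketch, assuming SPDK's public NVMe API (spdk/nvme.h).
 * io_complete_cb and the requeue policy are hypothetical examples.
 */
#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* "(00/08)" in the log above is status code type 0x00 (generic) with
 * status code 0x08 (Command Aborted due to SQ Deletion). */
static bool
io_aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

/* Hypothetical I/O completion callback (spdk_nvme_cmd_cb signature).
 * SQ-deletion aborts are transient -- the queue pair went away while the
 * path failed over -- so the request can be requeued rather than surfaced
 * to the caller as a hard error. */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* completed successfully */
	}

	if (io_aborted_by_sq_deletion(cpl)) {
		fprintf(stderr, "I/O aborted by SQ deletion; requeue for retry\n");
		/* application-specific requeue of cb_arg would go here */
		return;
	}

	fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
		(unsigned int)cpl->status.sct, (unsigned int)cpl->status.sc);
}

In this run the equivalent decision is made inside the bdev_nvme layer itself, which is where the bdev_nvme_failover_trid and "Resetting controller successful" messages above originate; the sketch only mirrors that behavior at the raw-driver level for readers following the log.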
01:19:27.621 [2024-07-22 11:16:26.050682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.050751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.050780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.050807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.050851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.050878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.050904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.050928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.050952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.050964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051038] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.621 [2024-07-22 11:16:26.051275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.621 [2024-07-22 11:16:26.051302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.621 [2024-07-22 11:16:26.051329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.621 [2024-07-22 11:16:26.051371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.621 [2024-07-22 11:16:26.051426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.621 [2024-07-22 11:16:26.051450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.621 [2024-07-22 11:16:26.051483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.621 [2024-07-22 11:16:26.051509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.621 [2024-07-22 11:16:26.051878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.621 [2024-07-22 11:16:26.051893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.622 [2024-07-22 11:16:26.051906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.051920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.622 [2024-07-22 11:16:26.051933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.051948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.622 [2024-07-22 11:16:26.051980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55936 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 
[2024-07-22 11:16:26.052360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.622 [2024-07-22 11:16:26.052436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.622 [2024-07-22 11:16:26.052449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.052984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.052996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.623 [2024-07-22 11:16:26.053183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.623 [2024-07-22 11:16:26.053208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.623 [2024-07-22 11:16:26.053234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.623 [2024-07-22 11:16:26.053260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.623 [2024-07-22 11:16:26.053286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.623 [2024-07-22 11:16:26.053312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.623 [2024-07-22 11:16:26.053352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:19:27.623 [2024-07-22 11:16:26.053378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 11:16:26.053454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.623 [2024-07-22 
11:16:26.053479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.623 [2024-07-22 11:16:26.053490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.053948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.053961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:19:27.624 [2024-07-22 11:16:26.054258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.624 [2024-07-22 11:16:26.054307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55808 len:8 PRP1 0x0 PRP2 0x0 01:19:27.624 [2024-07-22 11:16:26.054319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054350] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.624 [2024-07-22 11:16:26.054359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.624 [2024-07-22 11:16:26.054368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55816 len:8 PRP1 0x0 PRP2 0x0 01:19:27.624 [2024-07-22 11:16:26.054379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.624 [2024-07-22 11:16:26.054414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.624 [2024-07-22 11:16:26.054423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55824 len:8 PRP1 0x0 PRP2 0x0 01:19:27.624 [2024-07-22 11:16:26.054435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.624 [2024-07-22 11:16:26.054455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.624 [2024-07-22 11:16:26.054463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55832 len:8 PRP1 0x0 PRP2 0x0 01:19:27.624 [2024-07-22 11:16:26.054475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.624 [2024-07-22 11:16:26.054495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.624 [2024-07-22 11:16:26.054504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55840 len:8 PRP1 0x0 PRP2 0x0 01:19:27.624 [2024-07-22 11:16:26.054515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.624 [2024-07-22 11:16:26.054534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.624 [2024-07-22 11:16:26.054543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55848 len:8 PRP1 0x0 PRP2 0x0 01:19:27.624 [2024-07-22 11:16:26.054554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:19:27.624 [2024-07-22 11:16:26.054573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.624 [2024-07-22 11:16:26.054589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55856 len:8 PRP1 0x0 PRP2 0x0 01:19:27.624 [2024-07-22 11:16:26.054601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 01:19:27.624 [2024-07-22 11:16:26.054626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:19:27.624 [2024-07-22 11:16:26.054635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55864 len:8 PRP1 0x0 PRP2 0x0 01:19:27.624 [2024-07-22 11:16:26.054646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.624 [2024-07-22 11:16:26.054700] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x171b7c0 was disconnected and freed. reset controller. 01:19:27.624 [2024-07-22 11:16:26.054716] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 01:19:27.624 [2024-07-22 11:16:26.054767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:19:27.624 [2024-07-22 11:16:26.054786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.625 [2024-07-22 11:16:26.054799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:19:27.625 [2024-07-22 11:16:26.054811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.625 [2024-07-22 11:16:26.054823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:19:27.625 [2024-07-22 11:16:26.054834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.625 [2024-07-22 11:16:26.054846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:19:27.625 [2024-07-22 11:16:26.054857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:27.625 [2024-07-22 11:16:26.054868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:27.625 [2024-07-22 11:16:26.054899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ec110 (9): Bad file descriptor 01:19:27.625 [2024-07-22 11:16:26.058430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:27.625 [2024-07-22 11:16:26.092262] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
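The long run of ABORTED - SQ DELETION completions above is the expected side effect of the path switch recorded at the end of it: when the active path (10.0.0.2:4422) goes away, every command still queued on that I/O qpair is completed manually as aborted, and bdev_nvme fails over to the remaining trid (10.0.0.2:4420) and resets the controller. A switch like this can be driven by hand with the same RPC calls this test script uses; the sketch below reuses the socket, addresses, ports and NQN from this run and assumes an SPDK application (here bdevperf) is already serving RPCs on that socket.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # Register two trids under the same controller name; bdev_nvme keeps the
  # extra path as a failover target for NVMe0 (these calls mirror the ones
  # traced elsewhere in this log).
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Tear down the path currently carrying I/O: queued commands complete as
  # ABORTED - SQ DELETION and the controller is reset on the surviving trid.
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s $sock bdev_nvme_get_controllers   # NVMe0 should still be listed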
01:19:27.625
01:19:27.625 Latency(us)
01:19:27.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:19:27.625 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:19:27.625 Verification LBA range: start 0x0 length 0x4000
01:19:27.625 NVMe0n1 : 15.00 10708.09 41.83 245.07 0.00 11660.99 476.63 14834.97
01:19:27.625 ===================================================================================================================
01:19:27.625 Total : 10708.09 41.83 245.07 0.00 11660.99 476.63 14834.97
01:19:27.625 Received shutdown signal, test time was about 15.000000 seconds
01:19:27.625
01:19:27.625 Latency(us)
01:19:27.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:19:27.625 ===================================================================================================================
01:19:27.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=107251
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 107251 /var/tmp/bdevperf.sock
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 107251 ']'
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
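For readability, this is the bdevperf pattern the script applies at this point: start the tool idle with -z so that it only serves RPCs on a private socket, wait for that socket, attach the NVMe-oF controller over it, and finally trigger the workload with bdevperf.py perform_tests. A condensed sketch using the paths and flags recorded above; the socket-wait loop is a simplified stand-in for the harness's waitforlisten helper.

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # Same invocation as above: -z starts bdevperf with no bdevs and waits for
  # RPC configuration; -q/-o/-w/-t request queue depth 128, 4096-byte I/O, a
  # verify workload and a 1 second run.
  $bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # Simplified stand-in for waitforlisten: block until the RPC socket exists.
  while [ ! -S "$sock" ]; do sleep 0.1; done

  # Attach the controller over the socket, then kick off the configured workload.
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests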
01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:19:27.625 [2024-07-22 11:16:32.577241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 01:19:27.625 [2024-07-22 11:16:32.785464] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 01:19:27.625 11:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:27.884 NVMe0n1 01:19:27.884 11:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:28.142 01:19:28.142 11:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:28.709 01:19:28.709 11:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 01:19:28.709 11:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:19:28.709 11:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:28.966 11:16:34 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 01:19:32.247 11:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:19:32.247 11:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 01:19:32.247 11:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:19:32.247 11:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=107369 01:19:32.247 11:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 107369 01:19:33.184 0 01:19:33.185 11:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:19:33.185 [2024-07-22 11:16:32.090011] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:19:33.185 [2024-07-22 11:16:32.090086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107251 ] 01:19:33.185 [2024-07-22 11:16:32.219158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:33.185 [2024-07-22 11:16:32.280414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:19:33.185 [2024-07-22 11:16:34.023595] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 01:19:33.185 [2024-07-22 11:16:34.023754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:19:33.185 [2024-07-22 11:16:34.023779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:33.185 [2024-07-22 11:16:34.023799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:19:33.185 [2024-07-22 11:16:34.023813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:33.185 [2024-07-22 11:16:34.023827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:19:33.185 [2024-07-22 11:16:34.023840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:33.185 [2024-07-22 11:16:34.023854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:19:33.185 [2024-07-22 11:16:34.023868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:33.185 [2024-07-22 11:16:34.023881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:19:33.185 [2024-07-22 11:16:34.023934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:19:33.185 [2024-07-22 11:16:34.023963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b1110 (9): Bad file descriptor 01:19:33.185 [2024-07-22 11:16:34.030629] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 01:19:33.185 Running I/O for 1 seconds... 
01:19:33.185 01:19:33.185 Latency(us) 01:19:33.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:33.185 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:19:33.185 Verification LBA range: start 0x0 length 0x4000 01:19:33.185 NVMe0n1 : 1.01 10191.17 39.81 0.00 0.00 12502.94 1697.98 12511.42 01:19:33.185 =================================================================================================================== 01:19:33.185 Total : 10191.17 39.81 0.00 0.00 12502.94 1697.98 12511.42 01:19:33.185 11:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:19:33.185 11:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 01:19:33.443 11:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:33.702 11:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:19:33.702 11:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 01:19:33.961 11:16:39 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:19:34.221 11:16:39 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 107251 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 107251 ']' 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 107251 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107251 01:19:37.569 killing process with pid 107251 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107251' 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 107251 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 107251 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 01:19:37.569 11:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:19:37.827 11:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 01:19:37.827 11:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:19:37.827 
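Condensed, the teardown that was just traced looks like this: once the workload has finished, the remaining secondary paths are detached one at a time (re-listing the registered controllers after each step), the bdevperf process is stopped, and the subsystem is removed from the target. A sketch with the values from this run; the plain kill/wait stands in for the harness's killprocess helper.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  bdevperf_pid=107251   # PID of the bdevperf instance launched earlier in this run

  for port in 4422 4421; do
      # Drop one path, then list the registered controllers as the script does.
      $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
      $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  done

  # Stop bdevperf and remove the subsystem from the target (default RPC socket).
  kill "$bdevperf_pid"
  wait "$bdevperf_pid" 2>/dev/null || true   # ignore the status if it is not our child
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1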
11:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 01:19:37.827 11:16:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 01:19:37.827 11:16:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 01:19:37.827 11:16:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:19:37.827 11:16:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 01:19:37.827 11:16:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 01:19:37.827 11:16:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:19:37.827 rmmod nvme_tcp 01:19:37.827 rmmod nvme_fabrics 01:19:37.827 rmmod nvme_keyring 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 106895 ']' 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 106895 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 106895 ']' 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 106895 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106895 01:19:37.827 killing process with pid 106895 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106895' 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 106895 01:19:37.827 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 106895 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:19:38.393 01:19:38.393 real 0m31.216s 01:19:38.393 user 2m0.528s 01:19:38.393 sys 0m4.480s 01:19:38.393 ************************************ 01:19:38.393 END TEST nvmf_failover 01:19:38.393 ************************************ 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 01:19:38.393 11:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:19:38.393 11:16:43 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:19:38.393 11:16:43 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:19:38.393 11:16:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:19:38.393 11:16:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:19:38.393 11:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:38.393 ************************************ 01:19:38.393 START TEST nvmf_host_discovery 01:19:38.393 ************************************ 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:19:38.393 * Looking for test storage... 01:19:38.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:38.393 11:16:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:19:38.394 Cannot find device "nvmf_tgt_br" 01:19:38.394 
11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:19:38.394 Cannot find device "nvmf_tgt_br2" 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:19:38.394 Cannot find device "nvmf_tgt_br" 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 01:19:38.394 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:19:38.672 Cannot find device "nvmf_tgt_br2" 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:38.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:38.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:19:38.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:19:38.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 01:19:38.672 01:19:38.672 --- 10.0.0.2 ping statistics --- 01:19:38.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:38.672 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:19:38.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:38.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 01:19:38.672 01:19:38.672 --- 10.0.0.3 ping statistics --- 01:19:38.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:38.672 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:38.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:19:38.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 01:19:38.672 01:19:38.672 --- 10.0.0.1 ping statistics --- 01:19:38.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:38.672 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=107671 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 107671 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 107671 ']' 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:38.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:38.672 11:16:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:38.930 [2024-07-22 11:16:43.919561] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:19:38.930 [2024-07-22 11:16:43.919649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:38.930 [2024-07-22 11:16:44.057372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:38.930 [2024-07-22 11:16:44.130248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:19:38.930 [2024-07-22 11:16:44.130311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:38.930 [2024-07-22 11:16:44.130326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:38.930 [2024-07-22 11:16:44.130337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:38.930 [2024-07-22 11:16:44.130347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:38.930 [2024-07-22 11:16:44.130377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:39.863 [2024-07-22 11:16:44.900856] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:39.863 [2024-07-22 11:16:44.908990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:39.863 null0 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:39.863 null1 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:39.863 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=107721 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 107721 /tmp/host.sock 01:19:39.863 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 107721 ']' 01:19:39.864 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 01:19:39.864 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:39.864 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:19:39.864 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:39.864 11:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:39.864 [2024-07-22 11:16:45.006620] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:19:39.864 [2024-07-22 11:16:45.006904] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107721 ] 01:19:40.122 [2024-07-22 11:16:45.151420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:40.122 [2024-07-22 11:16:45.226511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 01:19:41.056 11:16:46 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.056 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:41.057 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.315 [2024-07-22 11:16:46.361368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:19:41.315 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.316 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 01:19:41.574 11:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 01:19:41.833 [2024-07-22 11:16:47.033893] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:19:41.833 [2024-07-22 11:16:47.033921] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:19:41.833 [2024-07-22 11:16:47.033955] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:19:42.092 [2024-07-22 11:16:47.122007] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 01:19:42.092 [2024-07-22 11:16:47.185895] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:19:42.092 [2024-07-22 11:16:47.185924] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:42.659 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.918 [2024-07-22 11:16:47.953886] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:19:42.918 [2024-07-22 11:16:47.954784] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:19:42.918 [2024-07-22 11:16:47.954819] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:42.918 11:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:19:42.918 11:16:48 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:42.918 [2024-07-22 11:16:48.040864] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:19:42.918 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:42.918 [2024-07-22 11:16:48.099121] 
bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:19:42.919 [2024-07-22 11:16:48.099148] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:19:42.919 [2024-07-22 11:16:48.099155] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:19:43.177 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 01:19:43.177 11:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.112 [2024-07-22 11:16:49.254799] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:19:44.112 [2024-07-22 11:16:49.254835] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:19:44.112 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:19:44.112 [2024-07-22 11:16:49.263427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:19:44.112 [2024-07-22 11:16:49.263470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:44.112 [2024-07-22 11:16:49.263482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:19:44.112 [2024-07-22 11:16:49.263491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:44.112 [2024-07-22 11:16:49.263505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:19:44.113 [2024-07-22 11:16:49.263513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:44.113 [2024-07-22 11:16:49.263522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:19:44.113 [2024-07-22 11:16:49.263531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:19:44.113 [2024-07-22 11:16:49.263539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168eb00 is same with the state(5) to be set 01:19:44.113 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:44.113 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.113 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:44.113 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.113 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:44.113 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:19:44.113 [2024-07-22 11:16:49.273385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168eb00 (9): Bad file descriptor 01:19:44.113 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.113 [2024-07-22 11:16:49.283406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:19:44.113 [2024-07-22 11:16:49.283503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:19:44.113 [2024-07-22 11:16:49.283527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168eb00 with addr=10.0.0.2, port=4420 01:19:44.113 [2024-07-22 11:16:49.283538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168eb00 is same with the state(5) to be set 01:19:44.113 [2024-07-22 11:16:49.283555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168eb00 (9): Bad file descriptor 01:19:44.113 [2024-07-22 11:16:49.283570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:19:44.113 [2024-07-22 11:16:49.283579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:19:44.113 [2024-07-22 11:16:49.283621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:19:44.113 [2024-07-22 11:16:49.283661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
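The earlier NOTICE lines in this run come from configuring the NVMe-oF target over RPC before discovery is exercised. As a reference, the bash sketch below replays that target-side sequence with plain scripts/rpc.py calls; the rpc path, addresses, and sizes simply mirror the rpc_cmd invocations logged above and are not the only valid values.

  #!/usr/bin/env bash
  # Target-side setup as reconstructed from this log (assumes the SPDK repo root
  # as the working directory and the default nvmf_tgt RPC socket).
  rpc=./scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192                    # "*** TCP Transport Init ***"
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
       -t tcp -a 10.0.0.2 -s 8009                                 # discovery service on port 8009
  $rpc bdev_null_create null0 1000 512                            # 1000 MB, 512-byte blocks
  $rpc bdev_null_create null1 1000 512                            # attached as a namespace later in the test
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0     # shows up as nvme0n1 on the host
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
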
01:19:44.113 [2024-07-22 11:16:49.293461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:19:44.113 [2024-07-22 11:16:49.293546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:19:44.113 [2024-07-22 11:16:49.293567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168eb00 with addr=10.0.0.2, port=4420 01:19:44.113 [2024-07-22 11:16:49.293578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168eb00 is same with the state(5) to be set 01:19:44.113 [2024-07-22 11:16:49.293594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168eb00 (9): Bad file descriptor 01:19:44.113 [2024-07-22 11:16:49.293608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:19:44.113 [2024-07-22 11:16:49.293616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:19:44.113 [2024-07-22 11:16:49.293625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:19:44.113 [2024-07-22 11:16:49.293639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:44.113 [2024-07-22 11:16:49.303513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:19:44.113 [2024-07-22 11:16:49.303600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:19:44.113 [2024-07-22 11:16:49.303635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168eb00 with addr=10.0.0.2, port=4420 01:19:44.113 [2024-07-22 11:16:49.303648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168eb00 is same with the state(5) to be set 01:19:44.113 [2024-07-22 11:16:49.303664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168eb00 (9): Bad file descriptor 01:19:44.113 [2024-07-22 11:16:49.303678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:19:44.113 [2024-07-22 11:16:49.303687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:19:44.113 [2024-07-22 11:16:49.303695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:19:44.113 [2024-07-22 11:16:49.303709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
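On the host side the test starts a second SPDK application purely as an initiator, with its own RPC socket, and drives it through bdev_nvme_start_discovery. A minimal sketch of that flow, assuming the SPDK repo root as the working directory; rpc_host is a local helper defined here for illustration, standing in for the harness's rpc_cmd -s /tmp/host.sock.

  # Start the host-side app and wait for its RPC socket (the harness does this
  # with waitforlisten; the loop below is a simplified stand-in).
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!
  while [ ! -S /tmp/host.sock ]; do sleep 0.1; done

  rpc_host() { ./scripts/rpc.py -s /tmp/host.sock "$@"; }

  rpc_host log_set_flag bdev_nvme
  rpc_host bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
           -f ipv4 -q nqn.2021-12.io.spdk:test

  # get_subsystem_names / get_bdev_list in discovery.sh reduce to these queries:
  rpc_host bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # nvme0 once attached
  rpc_host bdev_get_bdevs            | jq -r '.[].name' | sort | xargs   # nvme0n1, then nvme0n1 nvme0n2
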
01:19:44.113 [2024-07-22 11:16:49.313567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:19:44.113 [2024-07-22 11:16:49.313647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:19:44.113 [2024-07-22 11:16:49.313669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168eb00 with addr=10.0.0.2, port=4420 01:19:44.113 [2024-07-22 11:16:49.313681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168eb00 is same with the state(5) to be set 01:19:44.113 [2024-07-22 11:16:49.313697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168eb00 (9): Bad file descriptor 01:19:44.113 [2024-07-22 11:16:49.313711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:19:44.113 [2024-07-22 11:16:49.313720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:19:44.113 [2024-07-22 11:16:49.313728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:19:44.113 [2024-07-22 11:16:49.313742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:19:44.372 [2024-07-22 11:16:49.323617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:19:44.372 [2024-07-22 11:16:49.323742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:19:44.372 [2024-07-22 11:16:49.323765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168eb00 with addr=10.0.0.2, port=4420 01:19:44.372 [2024-07-22 11:16:49.323776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168eb00 is same with the state(5) to be set 01:19:44.372 [2024-07-22 11:16:49.323793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168eb00 (9): Bad file descriptor 01:19:44.372 [2024-07-22 11:16:49.323809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:19:44.372 [2024-07-22 11:16:49.323819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:19:44.372 [2024-07-22 11:16:49.323828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
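The connect() errno 111 (ECONNREFUSED) entries above begin as soon as the 4420 listener is removed from the target: the host still holds a controller path on that port and keeps retrying it until the next discovery log page reports the 4420 entry gone, leaving only 4421. A hedged reproduction of that step, and of the wait the harness performs afterwards:

  # Drop the first data port on the target; the discovery poller will prune it.
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

  # get_subsystem_paths nvme0 in discovery.sh is essentially this query; poll it
  # until only the second port remains.
  until [[ "$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
              | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" == "4421" ]]; do
      sleep 1
  done
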
01:19:44.372 [2024-07-22 11:16:49.323843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:44.372 [2024-07-22 11:16:49.333708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:19:44.372 [2024-07-22 11:16:49.333800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:19:44.372 [2024-07-22 11:16:49.333822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168eb00 with addr=10.0.0.2, port=4420 01:19:44.372 [2024-07-22 11:16:49.333832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168eb00 is same with the state(5) to be set 01:19:44.372 [2024-07-22 11:16:49.333849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168eb00 (9): Bad file descriptor 01:19:44.372 [2024-07-22 11:16:49.333863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:19:44.372 [2024-07-22 11:16:49.333872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:19:44.372 [2024-07-22 11:16:49.333880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:19:44.372 [2024-07-22 11:16:49.333895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
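The notification checks in this test (is_notification_count_eq) count the bdev events the host app has emitted since the last seen notification id. The pattern, reconstructed here rather than copied from discovery.sh, is simply:

  # Ask for everything newer than the last seen id and count the entries.
  notify_id=2                       # value carried over from the previous check
  notification_count=$(./scripts/rpc.py -s /tmp/host.sock \
      notify_get_notifications -i "$notify_id" | jq '. | length')
  notify_id=$((notify_id + notification_count))
  (( notification_count == 0 )) && echo "no new bdev events"   # expected right after the path removal
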
01:19:44.372 [2024-07-22 11:16:49.340120] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 01:19:44.372 [2024-07-22 11:16:49.340150] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 01:19:44.372 
11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.372 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:44.630 11:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:45.565 [2024-07-22 11:16:50.684133] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:19:45.565 [2024-07-22 11:16:50.684161] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:19:45.565 [2024-07-22 11:16:50.684178] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:19:45.565 [2024-07-22 11:16:50.770219] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 01:19:45.824 [2024-07-22 11:16:50.829893] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:19:45.824 [2024-07-22 11:16:50.829934] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:45.824 11:16:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 01:19:45.824 2024/07/22 11:16:50 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 01:19:45.825 request: 01:19:45.825 { 01:19:45.825 "method": "bdev_nvme_start_discovery", 01:19:45.825 "params": { 01:19:45.825 "name": "nvme", 01:19:45.825 "trtype": "tcp", 01:19:45.825 "traddr": "10.0.0.2", 01:19:45.825 "adrfam": "ipv4", 01:19:45.825 "trsvcid": "8009", 01:19:45.825 "hostnqn": "nqn.2021-12.io.spdk:test", 01:19:45.825 "wait_for_attach": true 01:19:45.825 } 01:19:45.825 } 01:19:45.825 Got JSON-RPC error response 01:19:45.825 GoRPCClient: error on JSON-RPC call 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:45.825 2024/07/22 11:16:50 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 01:19:45.825 request: 01:19:45.825 { 01:19:45.825 "method": "bdev_nvme_start_discovery", 01:19:45.825 "params": { 01:19:45.825 "name": "nvme_second", 01:19:45.825 "trtype": "tcp", 01:19:45.825 "traddr": "10.0.0.2", 01:19:45.825 "adrfam": "ipv4", 01:19:45.825 "trsvcid": "8009", 01:19:45.825 "hostnqn": "nqn.2021-12.io.spdk:test", 01:19:45.825 "wait_for_attach": true 01:19:45.825 } 01:19:45.825 } 01:19:45.825 Got JSON-RPC error response 01:19:45.825 GoRPCClient: error on JSON-RPC call 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:19:45.825 11:16:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:46.084 11:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:47.016 [2024-07-22 11:16:52.095443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:19:47.016 [2024-07-22 11:16:52.095492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ab350 with addr=10.0.0.2, port=8010 01:19:47.016 [2024-07-22 11:16:52.095509] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:19:47.016 [2024-07-22 11:16:52.095518] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:19:47.016 [2024-07-22 11:16:52.095526] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 01:19:47.948 [2024-07-22 11:16:53.095437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:19:47.948 [2024-07-22 11:16:53.095483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ab350 with addr=10.0.0.2, port=8010 01:19:47.948 [2024-07-22 11:16:53.095502] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:19:47.948 [2024-07-22 11:16:53.095511] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:19:47.948 [2024-07-22 11:16:53.095519] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 01:19:49.323 [2024-07-22 11:16:54.095367] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 
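For reference, the discovery flow the trace above walks through reduces to a handful of JSON-RPC calls against the host application's socket. A minimal sketch, assuming an SPDK host app already listening on /tmp/host.sock and a discovery service on 10.0.0.2:8009 as in this run (the JSON-RPC error dump for the timed-out attempt follows just below):

#!/usr/bin/env bash
# Sketch of the discovery RPC sequence exercised by host/discovery.sh above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/tmp/host.sock

# Start a discovery controller and wait (-w) until the discovered subsystems attach.
$rpc -s $sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Re-running against the same discovery endpoint (with the same or a new -b name)
# is rejected with -17 "File exists", as the two failed calls above show.
$rpc -s $sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected: File exists"

# Inspect what discovery produced.
$rpc -s $sock bdev_nvme_get_discovery_info    # discovery controllers (here: nvme)
$rpc -s $sock bdev_nvme_get_controllers       # attached subsystem controllers
$rpc -s $sock bdev_get_bdevs                  # resulting namespaces (nvme0n1, nvme0n2)

# A port with no listener (8010) plus a 3000 ms attach timeout (-T) fails with
# -110 "Connection timed out" after the connect retries logged above.
$rpc -s $sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected: timeout"

# Tear the discovery controller down when finished.
$rpc -s $sock bdev_nvme_stop_discovery -b nvme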
01:19:49.323 2024/07/22 11:16:54 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 01:19:49.323 request: 01:19:49.323 { 01:19:49.323 "method": "bdev_nvme_start_discovery", 01:19:49.323 "params": { 01:19:49.323 "name": "nvme_second", 01:19:49.323 "trtype": "tcp", 01:19:49.323 "traddr": "10.0.0.2", 01:19:49.323 "adrfam": "ipv4", 01:19:49.323 "trsvcid": "8010", 01:19:49.323 "hostnqn": "nqn.2021-12.io.spdk:test", 01:19:49.323 "wait_for_attach": false, 01:19:49.323 "attach_timeout_ms": 3000 01:19:49.323 } 01:19:49.323 } 01:19:49.323 Got JSON-RPC error response 01:19:49.323 GoRPCClient: error on JSON-RPC call 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 107721 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:19:49.323 rmmod nvme_tcp 01:19:49.323 rmmod nvme_fabrics 01:19:49.323 rmmod nvme_keyring 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 01:19:49.323 11:16:54 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 107671 ']' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 107671 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 107671 ']' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 107671 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107671 01:19:49.323 killing process with pid 107671 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107671' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 107671 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 107671 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:49.323 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:19:49.582 01:19:49.582 real 0m11.144s 01:19:49.582 user 0m22.013s 01:19:49.582 sys 0m1.639s 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:19:49.582 ************************************ 01:19:49.582 END TEST nvmf_host_discovery 01:19:49.582 ************************************ 01:19:49.582 11:16:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:19:49.582 11:16:54 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:19:49.582 11:16:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:19:49.582 11:16:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:19:49.582 11:16:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:49.582 ************************************ 01:19:49.582 START TEST nvmf_host_multipath_status 01:19:49.582 ************************************ 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:19:49.582 * Looking 
for test storage... 01:19:49.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:19:49.582 Cannot find device "nvmf_tgt_br" 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 01:19:49.582 Cannot find device "nvmf_tgt_br2" 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:19:49.582 Cannot find device "nvmf_tgt_br" 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 01:19:49.582 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:19:49.839 Cannot find device "nvmf_tgt_br2" 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:49.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:49.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:49.839 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:49.840 11:16:54 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:19:49.840 11:16:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:19:49.840 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:49.840 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:49.840 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:49.840 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:49.840 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:19:49.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:19:49.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 01:19:49.840 01:19:49.840 --- 10.0.0.2 ping statistics --- 01:19:49.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:49.840 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:19:49.840 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:19:50.097 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:50.097 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:19:50.097 01:19:50.097 --- 10.0.0.3 ping statistics --- 01:19:50.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:50.098 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:50.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:19:50.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 01:19:50.098 01:19:50.098 --- 10.0.0.1 ping statistics --- 01:19:50.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:50.098 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=108198 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 108198 01:19:50.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 108198 ']' 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:50.098 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:19:50.098 [2024-07-22 11:16:55.131150] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
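For orientation, the pings just above verify the virtual topology that nvmf_veth_init rebuilt a few lines earlier. A condensed sketch of that setup, using the interface names and 10.0.0.x addresses from this run (teardown of any previous topology and error handling omitted):

# Target network namespace plus three veth pairs: one for the initiator,
# two whose far ends live inside the namespace as NVMe/TCP listeners.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator at 10.0.0.1, target interfaces at 10.0.0.2 / 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge all host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Open the NVMe/TCP port on the initiator side, let the bridge forward, and
# confirm reachability in both directions, exactly as the pings above do.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3), which is the nvmfappstart step whose DPDK/EAL output continues below.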
01:19:50.098 [2024-07-22 11:16:55.131351] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:50.098 [2024-07-22 11:16:55.265179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:19:50.356 [2024-07-22 11:16:55.335039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:50.356 [2024-07-22 11:16:55.335444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:50.356 [2024-07-22 11:16:55.335592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:50.356 [2024-07-22 11:16:55.335669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:50.356 [2024-07-22 11:16:55.335793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:50.356 [2024-07-22 11:16:55.336019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:19:50.356 [2024-07-22 11:16:55.336023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:19:50.356 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:50.356 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 01:19:50.356 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:19:50.356 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 01:19:50.356 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:19:50.356 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:50.356 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=108198 01:19:50.356 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:19:50.615 [2024-07-22 11:16:55.786362] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:50.615 11:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:19:51.180 Malloc0 01:19:51.180 11:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:19:51.180 11:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:19:51.438 11:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:19:51.696 [2024-07-22 11:16:56.814318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:19:51.696 11:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 01:19:51.955 [2024-07-22 11:16:57.022429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=108288 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 108288 /var/tmp/bdevperf.sock 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 108288 ']' 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:19:51.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 01:19:51.955 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:19:52.213 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:19:52.213 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 01:19:52.213 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:19:52.470 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 01:19:52.727 Nvme0n1 01:19:52.727 11:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:19:53.291 Nvme0n1 01:19:53.291 11:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:19:53.291 11:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 01:19:55.187 11:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 01:19:55.187 11:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 01:19:55.444 11:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4421 -n optimized 01:19:55.700 11:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 01:19:56.634 11:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 01:19:56.634 11:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:19:56.634 11:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:19:56.634 11:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:19:56.893 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:19:56.893 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:19:56.893 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:19:56.893 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:19:57.151 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:19:57.151 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:19:57.152 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:19:57.152 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:19:57.410 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:19:57.410 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:19:57.410 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:19:57.410 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:19:57.669 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:19:57.669 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:19:57.669 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:19:57.669 11:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:19:57.928 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:19:57.928 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:19:57.928 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:19:57.928 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:19:58.186 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:19:58.186 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 01:19:58.186 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:19:58.444 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:19:58.703 11:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 01:19:59.637 11:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 01:19:59.637 11:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:19:59.637 11:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:19:59.637 11:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:19:59.895 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:19:59.895 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:19:59.895 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:19:59.895 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:00.154 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:00.154 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:00.154 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:00.154 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:00.412 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:00.412 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:00.412 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:00.412 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:00.670 11:17:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:00.670 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:20:00.670 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:00.670 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:00.927 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:00.927 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:20:00.927 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:00.928 11:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:01.185 11:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:01.185 11:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 01:20:01.185 11:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:20:01.443 11:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 01:20:01.700 11:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 01:20:02.631 11:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 01:20:02.631 11:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:20:02.631 11:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:02.631 11:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:20:02.888 11:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:02.888 11:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:20:02.889 11:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:02.889 11:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:03.146 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:03.146 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:03.146 11:17:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:03.146 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:03.403 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:03.403 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:03.403 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:03.403 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:03.660 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:03.660 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:20:03.660 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:03.660 11:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:03.919 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:03.919 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:20:03.919 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:03.919 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:04.176 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:04.176 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 01:20:04.176 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:20:04.434 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:20:04.696 11:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 01:20:05.676 11:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 01:20:05.676 11:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:20:05.676 11:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:05.676 11:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:20:05.676 11:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:05.676 11:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:20:05.676 11:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:05.676 11:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:05.934 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:05.934 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:05.934 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:05.934 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:06.192 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:06.192 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:06.192 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:06.192 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:06.451 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:06.451 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:20:06.451 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:06.451 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:06.710 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:06.710 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:20:06.710 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:06.710 11:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:06.969 11:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:06.969 11:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 01:20:06.969 11:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:20:07.228 11:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:20:07.487 11:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 01:20:08.421 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 01:20:08.421 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:20:08.421 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:08.421 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:20:08.678 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:08.678 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:20:08.678 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:08.678 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:08.935 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:08.935 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:08.936 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:08.936 11:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:09.193 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:09.193 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:09.193 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:09.193 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:09.450 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:09.450 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:20:09.450 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:09.450 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:09.707 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:09.707 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:20:09.707 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:09.707 11:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:09.965 11:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:09.965 11:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 01:20:09.965 11:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:20:10.223 11:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:20:10.481 11:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 01:20:11.414 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 01:20:11.414 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:20:11.414 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:11.414 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:20:11.671 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:11.671 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:20:11.671 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:11.671 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:11.929 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:11.929 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:11.929 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:11.929 11:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:12.187 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:12.187 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:12.187 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:12.187 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:12.445 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:12.445 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:20:12.445 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:12.445 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:12.703 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:12.703 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:20:12.703 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:12.703 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:12.961 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:12.961 11:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 01:20:13.218 11:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 01:20:13.218 11:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 01:20:13.218 11:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:20:13.785 11:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 01:20:14.720 11:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 01:20:14.720 11:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:20:14.720 11:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:20:14.720 11:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:14.977 11:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:14.977 11:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:20:14.977 11:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:14.977 11:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:15.234 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:15.234 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:15.234 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:15.234 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:15.491 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:15.491 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:15.491 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:15.491 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:15.748 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:15.748 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:20:15.748 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:15.749 11:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:16.007 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:16.007 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:20:16.007 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:16.007 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:16.265 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:16.265 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 01:20:16.265 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:20:16.265 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:20:16.524 11:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 01:20:17.458 
11:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 01:20:17.459 11:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:20:17.459 11:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:17.459 11:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:20:18.025 11:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:18.025 11:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:20:18.025 11:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:18.025 11:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:18.025 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:18.025 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:18.025 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:18.025 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:18.283 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:18.283 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:18.283 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:18.283 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:18.541 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:18.541 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:20:18.541 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:18.541 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:18.799 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:18.799 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:20:18.799 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:18.799 11:17:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:19.057 11:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:19.057 11:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 01:20:19.057 11:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:20:19.316 11:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 01:20:19.316 11:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 01:20:20.694 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 01:20:20.694 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:20:20.694 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:20.694 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:20:20.694 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:20.694 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:20:20.694 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:20.694 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:20.953 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:20.953 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:20.953 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:20.953 11:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:21.213 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:21.213 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:21.213 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:21.213 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:21.213 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:21.213 11:17:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:20:21.213 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:21.213 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:21.472 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:21.472 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:20:21.472 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:21.472 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:21.731 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:21.731 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 01:20:21.731 11:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:20:21.990 11:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:20:22.249 11:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 01:20:23.184 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 01:20:23.184 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:20:23.184 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:23.184 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:20:23.442 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:23.442 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:20:23.442 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:23.442 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:20:23.700 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:23.700 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:20:23.700 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:23.700 11:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:20:23.959 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:23.959 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:20:23.959 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:23.959 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:20:24.217 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:24.217 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:20:24.217 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:20:24.217 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:24.476 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:20:24.476 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:20:24.476 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:20:24.476 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 108288 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 108288 ']' 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 108288 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108288 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:20:24.734 killing process with pid 108288 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108288' 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 108288 01:20:24.734 11:17:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 108288 01:20:24.734 Connection closed with partial response: 01:20:24.734 
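Reading aid for the trace above: each sweep of host/multipath_status.sh flips the ANA state of the 4420 and 4421 listeners with nvmf_subsystem_listener_set_ana_state, sleeps one second so the host can process the ANA change notification, and then asserts the current/connected/accessible flags that bdev_nvme_get_io_paths reports for each port; midway it also switches the multipath policy to active_active with bdev_nvme_set_multipath_policy, after which both paths are expected to report current=true. The following is a minimal, purely illustrative sketch of that query/assert pattern, simplified from the real helpers (the rpc.py location and argument handling are assumptions), not the test script itself:

# Illustrative sketch only, not the actual host/multipath_status.sh helpers.
# Assumes rpc.py at scripts/rpc.py and the bdevperf RPC socket at
# /var/tmp/bdevperf.sock, as in the run above.
port_status() {          # e.g. port_status 4420 current true
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

set_ana_state() {        # e.g. set_ana_state 4421 inaccessible
    local port=$1 state=$2
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port" -n "$state"
}

For example, after set_ana_state 4421 inaccessible the run above expects port_status 4421 accessible false to pass on the next check, which is why each state change is followed by a sleep 1 before check_status runs.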
01:20:24.734 01:20:24.995 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 108288 01:20:24.995 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:20:24.995 [2024-07-22 11:16:57.079694] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:20:24.995 [2024-07-22 11:16:57.079790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108288 ] 01:20:24.995 [2024-07-22 11:16:57.212766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:24.995 [2024-07-22 11:16:57.273524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:20:24.995 Running I/O for 90 seconds... 01:20:24.995 [2024-07-22 11:17:12.302261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.302401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.302471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.302490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.302512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.302526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.302545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.302559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.302578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.302592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.302610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.302624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.302643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.302657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.302676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.302690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.303164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.303192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:20:24.995 [2024-07-22 11:17:12.303219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.995 [2024-07-22 11:17:12.303237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
01:20:24.996 [2024-07-22 11:17:12.303622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.303960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.303976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.304033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.304087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.304125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.304163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.304201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.304241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:20:24.996 [2024-07-22 11:17:12.304323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:20:24.996 [2024-07-22 11:17:12.304842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:20:24.996 [2024-07-22 11:17:12.304861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:20:24.996 [2024-07-22 11:17:12.304875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs condensed here: in a burst starting at 11:17:12.304 (READs around lba 79728-79928, WRITEs around lba 80192-80616) and a second burst starting at 11:17:27.329 (READs around lba 61688-62768, WRITEs around lba 62368-62952), every queued READ and WRITE on qid:1 nsid:1 (len:8) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
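Every completion condensed above carries the NVMe path-related status ASYMMETRIC ACCESS INACCESSIBLE (03/02), which is consistent with the multipath_status test putting this path's ANA (Asymmetric Namespace Access) group into the inaccessible state so that queued READ/WRITE commands are failed back to the host. A quick way to tally the affected commands from a saved copy of this console output is sketched below; the log file name is only a placeholder.

# total completions failed with the ANA-inaccessible status (log file name is a placeholder)
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' multipath_status_console.log
# break the failed submissions down by opcode
grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:1' multipath_status_console.log | sort | uniq -c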
[... the tail of the 11:17:27 burst, all completing with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status, is condensed as above ...]
01:20:24.998 Received shutdown signal, test time was about 31.439635 seconds
01:20:24.998
01:20:24.998 Latency(us)
01:20:24.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:20:24.998 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:20:24.998 Verification LBA range: start 0x0 length 0x4000
01:20:24.998 Nvme0n1 : 31.44 8868.29 34.64 0.00 0.00 14406.65 465.45 4026531.84
01:20:24.998 ===================================================================================================================
01:20:24.998 Total : 8868.29 34.64 0.00 0.00 14406.65 465.45 4026531.84
01:20:24.998 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:20:25.256 rmmod nvme_tcp 01:20:25.256 rmmod nvme_fabrics 01:20:25.256 rmmod nvme_keyring 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 108198 ']' 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 108198 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 108198 ']' 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 108198 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108198 01:20:25.256 killing process with pid 108198 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108198' 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 108198 01:20:25.256 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 108198 01:20:25.529 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:20:25.529 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:20:25.529 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:20:25.529 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:20:25.530 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 01:20:25.530 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:25.530 
11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:20:25.530 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:25.530 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:20:25.530 01:20:25.530 real 0m36.066s 01:20:25.530 user 1m56.979s 01:20:25.530 sys 0m9.315s 01:20:25.530 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 01:20:25.530 11:17:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:20:25.530 ************************************ 01:20:25.530 END TEST nvmf_host_multipath_status 01:20:25.530 ************************************ 01:20:25.530 11:17:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:20:25.530 11:17:30 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:20:25.530 11:17:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:20:25.530 11:17:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:20:25.530 11:17:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:20:25.788 ************************************ 01:20:25.788 START TEST nvmf_discovery_remove_ifc 01:20:25.788 ************************************ 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:20:25.788 * Looking for test storage... 01:20:25.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:25.788 11:17:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 
01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:20:25.788 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:20:25.789 Cannot find device "nvmf_tgt_br" 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:20:25.789 Cannot find device "nvmf_tgt_br2" 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:20:25.789 Cannot find device "nvmf_tgt_br" 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:20:25.789 Cannot find device "nvmf_tgt_br2" 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:25.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 01:20:25.789 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:25.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:26.047 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 01:20:26.047 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:20:26.047 11:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:20:26.047 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:20:26.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:20:26.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 01:20:26.048 01:20:26.048 --- 10.0.0.2 ping statistics --- 01:20:26.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:26.048 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:20:26.048 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:20:26.048 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 01:20:26.048 01:20:26.048 --- 10.0.0.3 ping statistics --- 01:20:26.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:26.048 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:20:26.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:20:26.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 01:20:26.048 01:20:26.048 --- 10.0.0.1 ping statistics --- 01:20:26.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:26.048 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=109542 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 109542 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 109542 ']' 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:20:26.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 01:20:26.048 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:26.306 [2024-07-22 11:17:31.263647] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
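The nvmf_veth_init sequence traced above builds a small virtual topology: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, while the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, with the bridge-side veth peers joined on nvmf_br. A condensed sketch of the same plumbing, using only commands that appear in the trace (assumes root privileges and a clean starting state, which is why the script's own cleanup pass earlier tolerates the "Cannot find device" errors):

  # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target paths 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # one bridge joins the three bridge-side peers
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP (4420) in and hairpin forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check, matching the pings recorded above
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1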
01:20:26.306 [2024-07-22 11:17:31.263730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:26.306 [2024-07-22 11:17:31.404567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:26.306 [2024-07-22 11:17:31.464346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:20:26.306 [2024-07-22 11:17:31.464410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:20:26.306 [2024-07-22 11:17:31.464420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:20:26.306 [2024-07-22 11:17:31.464443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:20:26.306 [2024-07-22 11:17:31.464449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:20:26.306 [2024-07-22 11:17:31.464475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:26.565 [2024-07-22 11:17:31.635782] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:20:26.565 [2024-07-22 11:17:31.643883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:20:26.565 null0 01:20:26.565 [2024-07-22 11:17:31.675790] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=109583 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 109583 /tmp/host.sock 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 109583 ']' 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 01:20:26.565 Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock... 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 01:20:26.565 11:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:26.565 [2024-07-22 11:17:31.758154] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:20:26.565 [2024-07-22 11:17:31.758243] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109583 ] 01:20:26.836 [2024-07-22 11:17:31.900027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:26.836 [2024-07-22 11:17:31.968259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:27.771 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:27.772 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:27.772 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 01:20:27.772 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:27.772 11:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:28.704 [2024-07-22 11:17:33.878932] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:20:28.704 [2024-07-22 11:17:33.878969] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:20:28.704 [2024-07-22 11:17:33.878988] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:20:28.962 [2024-07-22 11:17:33.967049] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 
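With the target running inside the namespace (started above as "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2" and listening on 10.0.0.2 ports 8009 and 4420), the host side of the test reduces to a second nvmf_tgt instance plus three RPCs. The rpc_cmd helper used in the trace wraps scripts/rpc.py, so a direct equivalent looks roughly like the sketch below; the target-side RPC batch that created the TCP transport and the two listeners is not echoed in the trace, so only the host side is shown:

  SPDK=/home/vagrant/spdk_repo/spdk
  # host-side app: one core, its own RPC socket, bdev_nvme debug logging
  $SPDK/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  # the framework waits for /tmp/host.sock (waitforlisten) before the first RPC
  rpc() { $SPDK/scripts/rpc.py -s /tmp/host.sock "$@"; }
  rpc bdev_nvme_set_options -e 1
  rpc framework_start_init
  # attach to the discovery service on 10.0.0.2:8009 and auto-create bdevs
  rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach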
01:20:28.962 [2024-07-22 11:17:34.030783] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:20:28.962 [2024-07-22 11:17:34.030852] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:20:28.962 [2024-07-22 11:17:34.030885] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:20:28.962 [2024-07-22 11:17:34.030904] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 01:20:28.962 [2024-07-22 11:17:34.030932] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:28.962 [2024-07-22 11:17:34.037428] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e162c0 was disconnected and freed. delete nvme_qpair. 
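The repeated rpc_cmd/jq/sort/xargs blocks that follow are the test's wait_for_bdev idiom: poll the host app's bdev list once a second until it matches the expected value (nvme0n1 here, the empty string after the interface is pulled, nvme1n1 after recovery). An approximate sketch of the two helpers, with get_bdev_list reconstructed from the pipeline visible in the trace and rpc.py standing in for rpc_cmd:

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  get_bdev_list() {
      # flatten the bdev names into one sorted, space-separated line
      "$RPC_PY" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1   # same cadence as host/discovery_remove_ifc.sh@34
      done
  }
  wait_for_bdev nvme0n1   # the discovered namespace shows up as bdev nvme0n1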
01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:20:28.962 11:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:20:30.335 11:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:20:31.269 11:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:20:32.213 11:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:33.184 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:33.184 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:33.184 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:33.184 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:33.184 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:33.184 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:33.184 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:33.184 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:33.445 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:20:33.445 11:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:34.379 [2024-07-22 11:17:39.458672] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 01:20:34.379 [2024-07-22 11:17:39.458749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:20:34.379 [2024-07-22 11:17:39.458766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:34.379 [2024-07-22 11:17:39.458778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:20:34.379 [2024-07-22 11:17:39.458787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:34.379 [2024-07-22 11:17:39.458797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:20:34.379 [2024-07-22 11:17:39.458806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:34.379 [2024-07-22 11:17:39.458815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:20:34.379 [2024-07-22 11:17:39.458825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:34.379 [2024-07-22 11:17:39.458835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:20:34.379 [2024-07-22 11:17:39.458843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:34.379 [2024-07-22 11:17:39.458852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddcfa0 is same with the state(5) to be set 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:20:34.379 11:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:34.379 [2024-07-22 11:17:39.468668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddcfa0 (9): Bad file descriptor 01:20:34.379 [2024-07-22 11:17:39.478692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:20:35.313 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:35.313 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:35.313 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:35.313 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:35.313 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:35.313 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:35.313 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:35.572 [2024-07-22 11:17:40.522102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 
110 01:20:35.572 [2024-07-22 11:17:40.522207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddcfa0 with addr=10.0.0.2, port=4420 01:20:35.572 [2024-07-22 11:17:40.522256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddcfa0 is same with the state(5) to be set 01:20:35.572 [2024-07-22 11:17:40.522311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddcfa0 (9): Bad file descriptor 01:20:35.572 [2024-07-22 11:17:40.523118] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 01:20:35.572 [2024-07-22 11:17:40.523194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:20:35.572 [2024-07-22 11:17:40.523222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:20:35.572 [2024-07-22 11:17:40.523247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:20:35.572 [2024-07-22 11:17:40.523287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:20:35.572 [2024-07-22 11:17:40.523312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:20:35.572 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:35.572 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:20:35.572 11:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:36.505 [2024-07-22 11:17:41.523368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:20:36.505 [2024-07-22 11:17:41.523404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:20:36.505 [2024-07-22 11:17:41.523415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:20:36.505 [2024-07-22 11:17:41.523427] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 01:20:36.505 [2024-07-22 11:17:41.523444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
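The connect() failures with errno 110 (ETIMEDOUT) and the "resetting controller" / "Resetting controller failed" cycle above are the expected result of deleting 10.0.0.2 from nvmf_tgt_if and downing it while discovery was started with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2: reconnects are retried roughly once a second, and the controller, along with bdev nvme0n1, is dropped once the loss timeout expires, which is why the bdev list soon comes back empty. While the path is down, the host app's view can be inspected over the same RPC socket; a sketch, assuming the standard bdev_nvme_get_controllers RPC is available in this build (this is not part of the test script itself):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }
  rpc bdev_nvme_get_controllers            # controller shown in a failed/reconnecting state
  rpc bdev_get_bdevs | jq -r '.[].name'    # drains once ctrlr-loss-timeout-sec expires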
01:20:36.505 [2024-07-22 11:17:41.523475] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 01:20:36.505 [2024-07-22 11:17:41.523510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:20:36.505 [2024-07-22 11:17:41.523526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:36.505 [2024-07-22 11:17:41.523538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:20:36.505 [2024-07-22 11:17:41.523546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:36.505 [2024-07-22 11:17:41.523557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:20:36.505 [2024-07-22 11:17:41.523572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:36.505 [2024-07-22 11:17:41.523582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:20:36.505 [2024-07-22 11:17:41.523590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:36.505 [2024-07-22 11:17:41.523600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:20:36.505 [2024-07-22 11:17:41.523608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:20:36.505 [2024-07-22 11:17:41.523617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
01:20:36.505 [2024-07-22 11:17:41.524132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddc410 (9): Bad file descriptor 01:20:36.505 [2024-07-22 11:17:41.525144] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 01:20:36.505 [2024-07-22 11:17:41.525168] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:20:36.505 11:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:37.877 11:17:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:20:37.877 11:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:20:38.442 [2024-07-22 11:17:43.530342] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:20:38.442 [2024-07-22 11:17:43.530365] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:20:38.442 [2024-07-22 11:17:43.530383] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:20:38.442 [2024-07-22 11:17:43.616442] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 01:20:38.700 [2024-07-22 11:17:43.672220] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:20:38.700 [2024-07-22 11:17:43.672266] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:20:38.700 [2024-07-22 11:17:43.672291] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:20:38.700 [2024-07-22 11:17:43.672307] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 01:20:38.700 [2024-07-22 11:17:43.672315] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:20:38.700 [2024-07-22 11:17:43.678751] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1dcc5f0 was disconnected and freed. delete nvme_qpair. 
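The recovery above mirrors the earlier failure: once the address and link return, the discovery poller re-attaches the subsystem and the namespace comes back as a fresh bdev, nvme1n1 rather than nvme0n1, because the original controller was torn down when the loss timeout expired. The flap uses only ip commands already shown in the trace and can be reproduced in isolation like this (sketch; the nvme1n1 name assumes the old bdev was fully removed first):

  # take the first target path away ...
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # ... wait for the host's bdev list to drain, then restore the path
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # discovery reconnects to 10.0.0.2:8009 and re-creates the bdev (nvme1n1)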
01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 109583 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 109583 ']' 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 109583 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109583 01:20:38.700 killing process with pid 109583 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109583' 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 109583 01:20:38.700 11:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 109583 01:20:38.958 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 01:20:38.958 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 01:20:38.958 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 01:20:38.958 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:20:38.958 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 01:20:38.958 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 01:20:38.958 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:20:38.958 rmmod nvme_tcp 01:20:38.958 rmmod nvme_fabrics 01:20:39.216 rmmod nvme_keyring 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 01:20:39.216 
11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 109542 ']' 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 109542 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 109542 ']' 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 109542 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109542 01:20:39.216 killing process with pid 109542 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109542' 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 109542 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 109542 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:20:39.216 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:39.474 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:20:39.474 01:20:39.474 real 0m13.703s 01:20:39.474 user 0m24.879s 01:20:39.474 sys 0m1.631s 01:20:39.474 ************************************ 01:20:39.474 END TEST nvmf_discovery_remove_ifc 01:20:39.474 ************************************ 01:20:39.474 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 01:20:39.474 11:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:20:39.474 11:17:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:20:39.474 11:17:44 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:20:39.474 11:17:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:20:39.474 11:17:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:20:39.474 11:17:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:20:39.474 ************************************ 01:20:39.474 START TEST nvmf_identify_kernel_target 01:20:39.474 ************************************ 01:20:39.474 11:17:44 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:20:39.474 * Looking for test storage... 01:20:39.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:39.474 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:39.474 11:17:44 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 01:20:39.475 11:17:44 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:20:39.475 Cannot find device "nvmf_tgt_br" 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:20:39.475 Cannot find device "nvmf_tgt_br2" 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:20:39.475 Cannot find device "nvmf_tgt_br" 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:20:39.475 
Cannot find device "nvmf_tgt_br2" 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 01:20:39.475 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:39.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:39.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:20:39.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:20:39.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 01:20:39.732 01:20:39.732 --- 10.0.0.2 ping statistics --- 01:20:39.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:39.732 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:20:39.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:20:39.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 01:20:39.732 01:20:39.732 --- 10.0.0.3 ping statistics --- 01:20:39.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:39.732 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:20:39.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:20:39.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 01:20:39.732 01:20:39.732 --- 10.0.0.1 ping statistics --- 01:20:39.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:39.732 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:20:39.732 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:20:39.989 11:17:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:20:40.246 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:20:40.246 Waiting for block devices as requested 01:20:40.246 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:20:40.503 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:20:40.503 No valid GPT data, bailing 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:20:40.503 No valid GPT data, bailing 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:20:40.503 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:20:40.761 No valid GPT data, bailing 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:20:40.761 No valid GPT data, bailing 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -a 10.0.0.1 -t tcp -s 4420 01:20:40.761 01:20:40.761 Discovery Log Number of Records 2, Generation counter 2 01:20:40.761 =====Discovery Log Entry 0====== 01:20:40.761 trtype: tcp 01:20:40.761 adrfam: ipv4 01:20:40.761 subtype: current discovery subsystem 01:20:40.761 treq: not specified, sq flow control disable supported 01:20:40.761 portid: 1 01:20:40.761 trsvcid: 4420 01:20:40.761 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:20:40.761 traddr: 10.0.0.1 01:20:40.761 eflags: none 01:20:40.761 sectype: none 01:20:40.761 =====Discovery Log Entry 1====== 01:20:40.761 trtype: tcp 01:20:40.761 adrfam: ipv4 01:20:40.761 subtype: nvme subsystem 01:20:40.761 treq: not specified, sq flow control disable supported 01:20:40.761 portid: 1 01:20:40.761 trsvcid: 4420 01:20:40.761 subnqn: nqn.2016-06.io.spdk:testnqn 01:20:40.761 traddr: 10.0.0.1 01:20:40.761 eflags: none 01:20:40.761 sectype: none 01:20:40.761 11:17:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 01:20:40.761 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 01:20:41.019 ===================================================== 01:20:41.019 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 01:20:41.019 ===================================================== 01:20:41.019 Controller Capabilities/Features 01:20:41.019 ================================ 01:20:41.019 Vendor ID: 0000 01:20:41.019 Subsystem Vendor ID: 0000 01:20:41.019 Serial Number: c744aaf82bcb7ca63013 01:20:41.019 Model Number: Linux 01:20:41.019 Firmware Version: 6.7.0-68 01:20:41.019 Recommended Arb Burst: 0 01:20:41.019 IEEE OUI Identifier: 00 00 00 01:20:41.019 Multi-path I/O 01:20:41.019 May have multiple subsystem ports: No 01:20:41.019 May have multiple controllers: No 01:20:41.019 Associated with SR-IOV VF: No 01:20:41.019 Max Data Transfer Size: Unlimited 01:20:41.019 Max Number of Namespaces: 0 
01:20:41.019 Max Number of I/O Queues: 1024 01:20:41.019 NVMe Specification Version (VS): 1.3 01:20:41.019 NVMe Specification Version (Identify): 1.3 01:20:41.019 Maximum Queue Entries: 1024 01:20:41.019 Contiguous Queues Required: No 01:20:41.019 Arbitration Mechanisms Supported 01:20:41.019 Weighted Round Robin: Not Supported 01:20:41.019 Vendor Specific: Not Supported 01:20:41.019 Reset Timeout: 7500 ms 01:20:41.019 Doorbell Stride: 4 bytes 01:20:41.019 NVM Subsystem Reset: Not Supported 01:20:41.019 Command Sets Supported 01:20:41.019 NVM Command Set: Supported 01:20:41.019 Boot Partition: Not Supported 01:20:41.019 Memory Page Size Minimum: 4096 bytes 01:20:41.019 Memory Page Size Maximum: 4096 bytes 01:20:41.019 Persistent Memory Region: Not Supported 01:20:41.019 Optional Asynchronous Events Supported 01:20:41.019 Namespace Attribute Notices: Not Supported 01:20:41.019 Firmware Activation Notices: Not Supported 01:20:41.019 ANA Change Notices: Not Supported 01:20:41.019 PLE Aggregate Log Change Notices: Not Supported 01:20:41.019 LBA Status Info Alert Notices: Not Supported 01:20:41.019 EGE Aggregate Log Change Notices: Not Supported 01:20:41.019 Normal NVM Subsystem Shutdown event: Not Supported 01:20:41.019 Zone Descriptor Change Notices: Not Supported 01:20:41.019 Discovery Log Change Notices: Supported 01:20:41.019 Controller Attributes 01:20:41.019 128-bit Host Identifier: Not Supported 01:20:41.019 Non-Operational Permissive Mode: Not Supported 01:20:41.019 NVM Sets: Not Supported 01:20:41.019 Read Recovery Levels: Not Supported 01:20:41.019 Endurance Groups: Not Supported 01:20:41.019 Predictable Latency Mode: Not Supported 01:20:41.019 Traffic Based Keep ALive: Not Supported 01:20:41.019 Namespace Granularity: Not Supported 01:20:41.019 SQ Associations: Not Supported 01:20:41.019 UUID List: Not Supported 01:20:41.019 Multi-Domain Subsystem: Not Supported 01:20:41.019 Fixed Capacity Management: Not Supported 01:20:41.019 Variable Capacity Management: Not Supported 01:20:41.019 Delete Endurance Group: Not Supported 01:20:41.019 Delete NVM Set: Not Supported 01:20:41.019 Extended LBA Formats Supported: Not Supported 01:20:41.019 Flexible Data Placement Supported: Not Supported 01:20:41.019 01:20:41.019 Controller Memory Buffer Support 01:20:41.019 ================================ 01:20:41.019 Supported: No 01:20:41.019 01:20:41.019 Persistent Memory Region Support 01:20:41.019 ================================ 01:20:41.019 Supported: No 01:20:41.019 01:20:41.019 Admin Command Set Attributes 01:20:41.019 ============================ 01:20:41.019 Security Send/Receive: Not Supported 01:20:41.019 Format NVM: Not Supported 01:20:41.020 Firmware Activate/Download: Not Supported 01:20:41.020 Namespace Management: Not Supported 01:20:41.020 Device Self-Test: Not Supported 01:20:41.020 Directives: Not Supported 01:20:41.020 NVMe-MI: Not Supported 01:20:41.020 Virtualization Management: Not Supported 01:20:41.020 Doorbell Buffer Config: Not Supported 01:20:41.020 Get LBA Status Capability: Not Supported 01:20:41.020 Command & Feature Lockdown Capability: Not Supported 01:20:41.020 Abort Command Limit: 1 01:20:41.020 Async Event Request Limit: 1 01:20:41.020 Number of Firmware Slots: N/A 01:20:41.020 Firmware Slot 1 Read-Only: N/A 01:20:41.020 Firmware Activation Without Reset: N/A 01:20:41.020 Multiple Update Detection Support: N/A 01:20:41.020 Firmware Update Granularity: No Information Provided 01:20:41.020 Per-Namespace SMART Log: No 01:20:41.020 Asymmetric Namespace Access Log Page: 
Not Supported 01:20:41.020 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:20:41.020 Command Effects Log Page: Not Supported 01:20:41.020 Get Log Page Extended Data: Supported 01:20:41.020 Telemetry Log Pages: Not Supported 01:20:41.020 Persistent Event Log Pages: Not Supported 01:20:41.020 Supported Log Pages Log Page: May Support 01:20:41.020 Commands Supported & Effects Log Page: Not Supported 01:20:41.020 Feature Identifiers & Effects Log Page:May Support 01:20:41.020 NVMe-MI Commands & Effects Log Page: May Support 01:20:41.020 Data Area 4 for Telemetry Log: Not Supported 01:20:41.020 Error Log Page Entries Supported: 1 01:20:41.020 Keep Alive: Not Supported 01:20:41.020 01:20:41.020 NVM Command Set Attributes 01:20:41.020 ========================== 01:20:41.020 Submission Queue Entry Size 01:20:41.020 Max: 1 01:20:41.020 Min: 1 01:20:41.020 Completion Queue Entry Size 01:20:41.020 Max: 1 01:20:41.020 Min: 1 01:20:41.020 Number of Namespaces: 0 01:20:41.020 Compare Command: Not Supported 01:20:41.020 Write Uncorrectable Command: Not Supported 01:20:41.020 Dataset Management Command: Not Supported 01:20:41.020 Write Zeroes Command: Not Supported 01:20:41.020 Set Features Save Field: Not Supported 01:20:41.020 Reservations: Not Supported 01:20:41.020 Timestamp: Not Supported 01:20:41.020 Copy: Not Supported 01:20:41.020 Volatile Write Cache: Not Present 01:20:41.020 Atomic Write Unit (Normal): 1 01:20:41.020 Atomic Write Unit (PFail): 1 01:20:41.020 Atomic Compare & Write Unit: 1 01:20:41.020 Fused Compare & Write: Not Supported 01:20:41.020 Scatter-Gather List 01:20:41.020 SGL Command Set: Supported 01:20:41.020 SGL Keyed: Not Supported 01:20:41.020 SGL Bit Bucket Descriptor: Not Supported 01:20:41.020 SGL Metadata Pointer: Not Supported 01:20:41.020 Oversized SGL: Not Supported 01:20:41.020 SGL Metadata Address: Not Supported 01:20:41.020 SGL Offset: Supported 01:20:41.020 Transport SGL Data Block: Not Supported 01:20:41.020 Replay Protected Memory Block: Not Supported 01:20:41.020 01:20:41.020 Firmware Slot Information 01:20:41.020 ========================= 01:20:41.020 Active slot: 0 01:20:41.020 01:20:41.020 01:20:41.020 Error Log 01:20:41.020 ========= 01:20:41.020 01:20:41.020 Active Namespaces 01:20:41.020 ================= 01:20:41.020 Discovery Log Page 01:20:41.020 ================== 01:20:41.020 Generation Counter: 2 01:20:41.020 Number of Records: 2 01:20:41.020 Record Format: 0 01:20:41.020 01:20:41.020 Discovery Log Entry 0 01:20:41.020 ---------------------- 01:20:41.020 Transport Type: 3 (TCP) 01:20:41.020 Address Family: 1 (IPv4) 01:20:41.020 Subsystem Type: 3 (Current Discovery Subsystem) 01:20:41.020 Entry Flags: 01:20:41.020 Duplicate Returned Information: 0 01:20:41.020 Explicit Persistent Connection Support for Discovery: 0 01:20:41.020 Transport Requirements: 01:20:41.020 Secure Channel: Not Specified 01:20:41.020 Port ID: 1 (0x0001) 01:20:41.020 Controller ID: 65535 (0xffff) 01:20:41.020 Admin Max SQ Size: 32 01:20:41.020 Transport Service Identifier: 4420 01:20:41.020 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:20:41.020 Transport Address: 10.0.0.1 01:20:41.020 Discovery Log Entry 1 01:20:41.020 ---------------------- 01:20:41.020 Transport Type: 3 (TCP) 01:20:41.020 Address Family: 1 (IPv4) 01:20:41.020 Subsystem Type: 2 (NVM Subsystem) 01:20:41.020 Entry Flags: 01:20:41.020 Duplicate Returned Information: 0 01:20:41.020 Explicit Persistent Connection Support for Discovery: 0 01:20:41.020 Transport Requirements: 01:20:41.020 
Secure Channel: Not Specified 01:20:41.020 Port ID: 1 (0x0001) 01:20:41.020 Controller ID: 65535 (0xffff) 01:20:41.020 Admin Max SQ Size: 32 01:20:41.020 Transport Service Identifier: 4420 01:20:41.020 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 01:20:41.020 Transport Address: 10.0.0.1 01:20:41.020 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:20:41.279 get_feature(0x01) failed 01:20:41.279 get_feature(0x02) failed 01:20:41.279 get_feature(0x04) failed 01:20:41.279 ===================================================== 01:20:41.279 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:20:41.279 ===================================================== 01:20:41.279 Controller Capabilities/Features 01:20:41.279 ================================ 01:20:41.279 Vendor ID: 0000 01:20:41.279 Subsystem Vendor ID: 0000 01:20:41.279 Serial Number: fa79ead42b5330025de0 01:20:41.279 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 01:20:41.279 Firmware Version: 6.7.0-68 01:20:41.279 Recommended Arb Burst: 6 01:20:41.279 IEEE OUI Identifier: 00 00 00 01:20:41.279 Multi-path I/O 01:20:41.279 May have multiple subsystem ports: Yes 01:20:41.279 May have multiple controllers: Yes 01:20:41.279 Associated with SR-IOV VF: No 01:20:41.279 Max Data Transfer Size: Unlimited 01:20:41.279 Max Number of Namespaces: 1024 01:20:41.279 Max Number of I/O Queues: 128 01:20:41.279 NVMe Specification Version (VS): 1.3 01:20:41.279 NVMe Specification Version (Identify): 1.3 01:20:41.279 Maximum Queue Entries: 1024 01:20:41.279 Contiguous Queues Required: No 01:20:41.279 Arbitration Mechanisms Supported 01:20:41.279 Weighted Round Robin: Not Supported 01:20:41.279 Vendor Specific: Not Supported 01:20:41.279 Reset Timeout: 7500 ms 01:20:41.279 Doorbell Stride: 4 bytes 01:20:41.279 NVM Subsystem Reset: Not Supported 01:20:41.279 Command Sets Supported 01:20:41.279 NVM Command Set: Supported 01:20:41.279 Boot Partition: Not Supported 01:20:41.279 Memory Page Size Minimum: 4096 bytes 01:20:41.279 Memory Page Size Maximum: 4096 bytes 01:20:41.279 Persistent Memory Region: Not Supported 01:20:41.279 Optional Asynchronous Events Supported 01:20:41.279 Namespace Attribute Notices: Supported 01:20:41.279 Firmware Activation Notices: Not Supported 01:20:41.279 ANA Change Notices: Supported 01:20:41.279 PLE Aggregate Log Change Notices: Not Supported 01:20:41.279 LBA Status Info Alert Notices: Not Supported 01:20:41.279 EGE Aggregate Log Change Notices: Not Supported 01:20:41.279 Normal NVM Subsystem Shutdown event: Not Supported 01:20:41.279 Zone Descriptor Change Notices: Not Supported 01:20:41.279 Discovery Log Change Notices: Not Supported 01:20:41.279 Controller Attributes 01:20:41.279 128-bit Host Identifier: Supported 01:20:41.279 Non-Operational Permissive Mode: Not Supported 01:20:41.279 NVM Sets: Not Supported 01:20:41.279 Read Recovery Levels: Not Supported 01:20:41.279 Endurance Groups: Not Supported 01:20:41.279 Predictable Latency Mode: Not Supported 01:20:41.279 Traffic Based Keep ALive: Supported 01:20:41.279 Namespace Granularity: Not Supported 01:20:41.279 SQ Associations: Not Supported 01:20:41.279 UUID List: Not Supported 01:20:41.279 Multi-Domain Subsystem: Not Supported 01:20:41.279 Fixed Capacity Management: Not Supported 01:20:41.279 Variable Capacity Management: Not Supported 01:20:41.279 
Delete Endurance Group: Not Supported 01:20:41.279 Delete NVM Set: Not Supported 01:20:41.279 Extended LBA Formats Supported: Not Supported 01:20:41.279 Flexible Data Placement Supported: Not Supported 01:20:41.279 01:20:41.279 Controller Memory Buffer Support 01:20:41.279 ================================ 01:20:41.279 Supported: No 01:20:41.279 01:20:41.279 Persistent Memory Region Support 01:20:41.279 ================================ 01:20:41.279 Supported: No 01:20:41.279 01:20:41.279 Admin Command Set Attributes 01:20:41.279 ============================ 01:20:41.279 Security Send/Receive: Not Supported 01:20:41.279 Format NVM: Not Supported 01:20:41.279 Firmware Activate/Download: Not Supported 01:20:41.279 Namespace Management: Not Supported 01:20:41.279 Device Self-Test: Not Supported 01:20:41.279 Directives: Not Supported 01:20:41.279 NVMe-MI: Not Supported 01:20:41.279 Virtualization Management: Not Supported 01:20:41.279 Doorbell Buffer Config: Not Supported 01:20:41.279 Get LBA Status Capability: Not Supported 01:20:41.279 Command & Feature Lockdown Capability: Not Supported 01:20:41.279 Abort Command Limit: 4 01:20:41.279 Async Event Request Limit: 4 01:20:41.279 Number of Firmware Slots: N/A 01:20:41.279 Firmware Slot 1 Read-Only: N/A 01:20:41.279 Firmware Activation Without Reset: N/A 01:20:41.279 Multiple Update Detection Support: N/A 01:20:41.279 Firmware Update Granularity: No Information Provided 01:20:41.279 Per-Namespace SMART Log: Yes 01:20:41.279 Asymmetric Namespace Access Log Page: Supported 01:20:41.279 ANA Transition Time : 10 sec 01:20:41.279 01:20:41.279 Asymmetric Namespace Access Capabilities 01:20:41.279 ANA Optimized State : Supported 01:20:41.279 ANA Non-Optimized State : Supported 01:20:41.279 ANA Inaccessible State : Supported 01:20:41.279 ANA Persistent Loss State : Supported 01:20:41.279 ANA Change State : Supported 01:20:41.279 ANAGRPID is not changed : No 01:20:41.279 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 01:20:41.279 01:20:41.279 ANA Group Identifier Maximum : 128 01:20:41.279 Number of ANA Group Identifiers : 128 01:20:41.279 Max Number of Allowed Namespaces : 1024 01:20:41.279 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 01:20:41.279 Command Effects Log Page: Supported 01:20:41.279 Get Log Page Extended Data: Supported 01:20:41.279 Telemetry Log Pages: Not Supported 01:20:41.279 Persistent Event Log Pages: Not Supported 01:20:41.279 Supported Log Pages Log Page: May Support 01:20:41.279 Commands Supported & Effects Log Page: Not Supported 01:20:41.279 Feature Identifiers & Effects Log Page:May Support 01:20:41.279 NVMe-MI Commands & Effects Log Page: May Support 01:20:41.279 Data Area 4 for Telemetry Log: Not Supported 01:20:41.279 Error Log Page Entries Supported: 128 01:20:41.279 Keep Alive: Supported 01:20:41.279 Keep Alive Granularity: 1000 ms 01:20:41.279 01:20:41.279 NVM Command Set Attributes 01:20:41.279 ========================== 01:20:41.279 Submission Queue Entry Size 01:20:41.279 Max: 64 01:20:41.279 Min: 64 01:20:41.279 Completion Queue Entry Size 01:20:41.279 Max: 16 01:20:41.279 Min: 16 01:20:41.279 Number of Namespaces: 1024 01:20:41.279 Compare Command: Not Supported 01:20:41.279 Write Uncorrectable Command: Not Supported 01:20:41.279 Dataset Management Command: Supported 01:20:41.279 Write Zeroes Command: Supported 01:20:41.279 Set Features Save Field: Not Supported 01:20:41.279 Reservations: Not Supported 01:20:41.279 Timestamp: Not Supported 01:20:41.279 Copy: Not Supported 01:20:41.279 Volatile Write Cache: Present 
01:20:41.279 Atomic Write Unit (Normal): 1 01:20:41.279 Atomic Write Unit (PFail): 1 01:20:41.279 Atomic Compare & Write Unit: 1 01:20:41.279 Fused Compare & Write: Not Supported 01:20:41.279 Scatter-Gather List 01:20:41.279 SGL Command Set: Supported 01:20:41.279 SGL Keyed: Not Supported 01:20:41.279 SGL Bit Bucket Descriptor: Not Supported 01:20:41.279 SGL Metadata Pointer: Not Supported 01:20:41.279 Oversized SGL: Not Supported 01:20:41.279 SGL Metadata Address: Not Supported 01:20:41.279 SGL Offset: Supported 01:20:41.279 Transport SGL Data Block: Not Supported 01:20:41.279 Replay Protected Memory Block: Not Supported 01:20:41.279 01:20:41.279 Firmware Slot Information 01:20:41.279 ========================= 01:20:41.279 Active slot: 0 01:20:41.279 01:20:41.279 Asymmetric Namespace Access 01:20:41.279 =========================== 01:20:41.279 Change Count : 0 01:20:41.279 Number of ANA Group Descriptors : 1 01:20:41.279 ANA Group Descriptor : 0 01:20:41.279 ANA Group ID : 1 01:20:41.279 Number of NSID Values : 1 01:20:41.279 Change Count : 0 01:20:41.279 ANA State : 1 01:20:41.279 Namespace Identifier : 1 01:20:41.279 01:20:41.279 Commands Supported and Effects 01:20:41.279 ============================== 01:20:41.279 Admin Commands 01:20:41.279 -------------- 01:20:41.279 Get Log Page (02h): Supported 01:20:41.279 Identify (06h): Supported 01:20:41.279 Abort (08h): Supported 01:20:41.279 Set Features (09h): Supported 01:20:41.279 Get Features (0Ah): Supported 01:20:41.279 Asynchronous Event Request (0Ch): Supported 01:20:41.279 Keep Alive (18h): Supported 01:20:41.279 I/O Commands 01:20:41.279 ------------ 01:20:41.279 Flush (00h): Supported 01:20:41.279 Write (01h): Supported LBA-Change 01:20:41.279 Read (02h): Supported 01:20:41.279 Write Zeroes (08h): Supported LBA-Change 01:20:41.279 Dataset Management (09h): Supported 01:20:41.279 01:20:41.279 Error Log 01:20:41.279 ========= 01:20:41.279 Entry: 0 01:20:41.279 Error Count: 0x3 01:20:41.279 Submission Queue Id: 0x0 01:20:41.279 Command Id: 0x5 01:20:41.279 Phase Bit: 0 01:20:41.279 Status Code: 0x2 01:20:41.279 Status Code Type: 0x0 01:20:41.279 Do Not Retry: 1 01:20:41.279 Error Location: 0x28 01:20:41.279 LBA: 0x0 01:20:41.279 Namespace: 0x0 01:20:41.279 Vendor Log Page: 0x0 01:20:41.279 ----------- 01:20:41.279 Entry: 1 01:20:41.279 Error Count: 0x2 01:20:41.279 Submission Queue Id: 0x0 01:20:41.279 Command Id: 0x5 01:20:41.279 Phase Bit: 0 01:20:41.279 Status Code: 0x2 01:20:41.279 Status Code Type: 0x0 01:20:41.279 Do Not Retry: 1 01:20:41.279 Error Location: 0x28 01:20:41.280 LBA: 0x0 01:20:41.280 Namespace: 0x0 01:20:41.280 Vendor Log Page: 0x0 01:20:41.280 ----------- 01:20:41.280 Entry: 2 01:20:41.280 Error Count: 0x1 01:20:41.280 Submission Queue Id: 0x0 01:20:41.280 Command Id: 0x4 01:20:41.280 Phase Bit: 0 01:20:41.280 Status Code: 0x2 01:20:41.280 Status Code Type: 0x0 01:20:41.280 Do Not Retry: 1 01:20:41.280 Error Location: 0x28 01:20:41.280 LBA: 0x0 01:20:41.280 Namespace: 0x0 01:20:41.280 Vendor Log Page: 0x0 01:20:41.280 01:20:41.280 Number of Queues 01:20:41.280 ================ 01:20:41.280 Number of I/O Submission Queues: 128 01:20:41.280 Number of I/O Completion Queues: 128 01:20:41.280 01:20:41.280 ZNS Specific Controller Data 01:20:41.280 ============================ 01:20:41.280 Zone Append Size Limit: 0 01:20:41.280 01:20:41.280 01:20:41.280 Active Namespaces 01:20:41.280 ================= 01:20:41.280 get_feature(0x05) failed 01:20:41.280 Namespace ID:1 01:20:41.280 Command Set Identifier: NVM (00h) 
01:20:41.280 Deallocate: Supported 01:20:41.280 Deallocated/Unwritten Error: Not Supported 01:20:41.280 Deallocated Read Value: Unknown 01:20:41.280 Deallocate in Write Zeroes: Not Supported 01:20:41.280 Deallocated Guard Field: 0xFFFF 01:20:41.280 Flush: Supported 01:20:41.280 Reservation: Not Supported 01:20:41.280 Namespace Sharing Capabilities: Multiple Controllers 01:20:41.280 Size (in LBAs): 1310720 (5GiB) 01:20:41.280 Capacity (in LBAs): 1310720 (5GiB) 01:20:41.280 Utilization (in LBAs): 1310720 (5GiB) 01:20:41.280 UUID: 95ca0dd6-69bd-4cdb-8b43-331f6ede051d 01:20:41.280 Thin Provisioning: Not Supported 01:20:41.280 Per-NS Atomic Units: Yes 01:20:41.280 Atomic Boundary Size (Normal): 0 01:20:41.280 Atomic Boundary Size (PFail): 0 01:20:41.280 Atomic Boundary Offset: 0 01:20:41.280 NGUID/EUI64 Never Reused: No 01:20:41.280 ANA group ID: 1 01:20:41.280 Namespace Write Protected: No 01:20:41.280 Number of LBA Formats: 1 01:20:41.280 Current LBA Format: LBA Format #00 01:20:41.280 LBA Format #00: Data Size: 4096 Metadata Size: 0 01:20:41.280 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:20:41.280 rmmod nvme_tcp 01:20:41.280 rmmod nvme_fabrics 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:20:41.280 
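The identify_kernel_target run traced above reduces to three steps: export a namespace through the kernel nvmet configfs tree, query it from the initiator with nvme discover and spdk_nvme_identify, and tear the configfs tree back down (the teardown continues in the trace below). The following is a minimal standalone sketch of that sequence, reusing the values from this run (subsystem nqn.2016-06.io.spdk:testnqn, backing device /dev/nvme1n1, listen address 10.0.0.1 port 4420 over TCP). The xtrace lines record only the echoed values, not their redirect targets, so the attribute paths used below (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumptions taken from the standard nvmet configfs layout rather than from the trace.

#!/usr/bin/env bash
# Sketch only: mirrors the configure_kernel_target/clean_kernel_target steps traced above.
# Assumes /dev/nvme1n1 is a spare, unused namespace and the nvmet modules are available.
set -e

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
ns=$subsys/namespaces/1
port=/sys/kernel/config/nvmet/ports/1
spdk_bin=/home/vagrant/spdk_repo/spdk/build/bin   # path used by this run

modprobe nvmet
modprobe nvmet_tcp   # the trace above shows only 'modprobe nvmet'; loaded explicitly here

# Export /dev/nvme1n1 as namespace 1 of the test subsystem, listening on 10.0.0.1:4420.
mkdir -p "$ns" "$port"
echo "SPDK-$nqn"  > "$subsys/attr_model"           # assumed target of the 'echo SPDK-nqn...' above
echo 1            > "$subsys/attr_allow_any_host"  # assumed target of the first 'echo 1'
echo /dev/nvme1n1 > "$ns/device_path"
echo 1            > "$ns/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Initiator-side queries, as issued by the test.
hostnqn=$(nvme gen-hostnqn)
nvme discover --hostnqn="$hostnqn" --hostid="${hostnqn##*:}" -a 10.0.0.1 -t tcp -s 4420
"$spdk_bin/spdk_nvme_identify" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
"$spdk_bin/spdk_nvme_identify" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

# Teardown in the same order as clean_kernel_target: disable, unlink, remove, unload.
echo 0 > "$ns/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

Exporting the namespace through configfs rather than through an SPDK userspace target is the point of this test: it exercises SPDK's initiator-side discovery and identify paths against the Linux kernel's native NVMe/TCP target.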
11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:20:41.280 11:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:20:42.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:20:42.212 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:20:42.212 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:20:42.212 ************************************ 01:20:42.212 END TEST nvmf_identify_kernel_target 01:20:42.212 ************************************ 01:20:42.212 01:20:42.212 real 0m2.816s 01:20:42.212 user 0m0.965s 01:20:42.212 sys 0m1.350s 01:20:42.212 11:17:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 01:20:42.212 11:17:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 01:20:42.212 11:17:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:20:42.212 11:17:47 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:20:42.212 11:17:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:20:42.212 11:17:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:20:42.212 11:17:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:20:42.212 ************************************ 01:20:42.212 START TEST nvmf_auth_host 01:20:42.212 ************************************ 01:20:42.212 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:20:42.470 * Looking for test storage... 
01:20:42.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:42.470 11:17:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:20:42.471 Cannot find device "nvmf_tgt_br" 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:20:42.471 Cannot find device "nvmf_tgt_br2" 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:20:42.471 Cannot find device "nvmf_tgt_br" 
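Note on the "Cannot find device" / "Cannot open network namespace" lines above: they are the expected output of the teardown pass that nvmf_veth_init runs before rebuilding its test network (the trace's "# true" entries show each failure is tolerated). For orientation, the setup commands that follow in the log amount to roughly the layout below; this is a condensed sketch only, with interface names and addresses copied from the trace and the link-up steps omitted.

# Sketch of the topology nvmf_veth_init rebuilds: initiator stays in the root
# namespace, both target interfaces live in nvmf_tgt_ns_spdk, and all veth
# peers are joined by the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target if 1 <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target if 2 <-> bridge
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP listener port

The three pings in the trace (10.0.0.2, 10.0.0.3 from the root namespace and 10.0.0.1 from inside the namespace) simply confirm this layout is reachable in both directions before the target application starts.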
01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:20:42.471 Cannot find device "nvmf_tgt_br2" 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:42.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:42.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:20:42.471 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:20:42.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:20:42.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 01:20:42.729 01:20:42.729 --- 10.0.0.2 ping statistics --- 01:20:42.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:42.729 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:20:42.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:20:42.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 01:20:42.729 01:20:42.729 --- 10.0.0.3 ping statistics --- 01:20:42.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:42.729 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:20:42.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:20:42.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:20:42.729 01:20:42.729 --- 10.0.0.1 ping statistics --- 01:20:42.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:42.729 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=110471 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 110471 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 110471 ']' 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:20:42.729 11:17:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:20:42.729 11:17:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=47fde8e4a5f2356826d0479124426c2a 01:20:44.103 11:17:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gLv 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 47fde8e4a5f2356826d0479124426c2a 0 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 47fde8e4a5f2356826d0479124426c2a 0 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=47fde8e4a5f2356826d0479124426c2a 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gLv 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gLv 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gLv 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=82278000cc675f50892cf132dec6562ee5de07dd7faf785ae739ffadd068eae6 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9A9 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 82278000cc675f50892cf132dec6562ee5de07dd7faf785ae739ffadd068eae6 3 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 82278000cc675f50892cf132dec6562ee5de07dd7faf785ae739ffadd068eae6 3 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=82278000cc675f50892cf132dec6562ee5de07dd7faf785ae739ffadd068eae6 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9A9 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9A9 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.9A9 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5cfb8278ee1f68c1c955066ddf82d59b77aac03c8f313791 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.g3K 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5cfb8278ee1f68c1c955066ddf82d59b77aac03c8f313791 0 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5cfb8278ee1f68c1c955066ddf82d59b77aac03c8f313791 0 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5cfb8278ee1f68c1c955066ddf82d59b77aac03c8f313791 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.g3K 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.g3K 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.g3K 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d72aec2771875351efb39f23590b2d28fc079cc96349f37 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OGd 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d72aec2771875351efb39f23590b2d28fc079cc96349f37 2 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d72aec2771875351efb39f23590b2d28fc079cc96349f37 2 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d72aec2771875351efb39f23590b2d28fc079cc96349f37 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OGd 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OGd 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.OGd 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=125bcbbd460b8b9fc8b55b826508f974 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jKr 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 125bcbbd460b8b9fc8b55b826508f974 
1 01:20:44.103 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 125bcbbd460b8b9fc8b55b826508f974 1 01:20:44.104 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.104 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.104 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=125bcbbd460b8b9fc8b55b826508f974 01:20:44.104 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 01:20:44.104 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jKr 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jKr 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jKr 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=29e863fd44c1d83f168902bd0d7a54af 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.tSb 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 29e863fd44c1d83f168902bd0d7a54af 1 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 29e863fd44c1d83f168902bd0d7a54af 1 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=29e863fd44c1d83f168902bd0d7a54af 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.tSb 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.tSb 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.tSb 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 01:20:44.361 11:17:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8ecf481f7f8b6806d94cb4c1afafcb65e3dea08817f693dd 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lPM 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8ecf481f7f8b6806d94cb4c1afafcb65e3dea08817f693dd 2 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8ecf481f7f8b6806d94cb4c1afafcb65e3dea08817f693dd 2 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8ecf481f7f8b6806d94cb4c1afafcb65e3dea08817f693dd 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lPM 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lPM 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.lPM 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c6672855ebeafa87e9dad89f1e413846 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ePZ 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c6672855ebeafa87e9dad89f1e413846 0 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c6672855ebeafa87e9dad89f1e413846 0 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c6672855ebeafa87e9dad89f1e413846 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ePZ 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ePZ 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ePZ 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=02b6c5b02fef961e37ed50c843a3193e503bc1965a0c21fea6a6375d1b61cc7c 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.w6z 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 02b6c5b02fef961e37ed50c843a3193e503bc1965a0c21fea6a6375d1b61cc7c 3 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 02b6c5b02fef961e37ed50c843a3193e503bc1965a0c21fea6a6375d1b61cc7c 3 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=02b6c5b02fef961e37ed50c843a3193e503bc1965a0c21fea6a6375d1b61cc7c 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 01:20:44.361 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 01:20:44.618 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.w6z 01:20:44.618 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.w6z 01:20:44.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:44.618 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.w6z 01:20:44.618 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 01:20:44.618 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 110471 01:20:44.618 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 110471 ']' 01:20:44.618 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:44.618 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 01:20:44.619 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
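Each gen_dhchap_key call above draws random bytes with xxd from /dev/urandom and writes a DH-HMAC-CHAP secret to a mktemp'd /tmp/spdk.key-* file (mode 0600). The inline "python -" step itself is not visible in the trace; the sketch below is a hedged reconstruction, not a copy of nvmf/common.sh, of what such a formatter does for the DHHC-1:<digest>:<blob>: strings that appear later in this log, assuming the blob is base64 of the ASCII hex key followed by its little-endian CRC-32. The key value and digest id are taken from the first gen_dhchap_key call in this run.

# Hedged reconstruction of the DHHC-1 secret layout seen in this log.
key=47fde8e4a5f2356826d0479124426c2a    # hex string produced by "xxd -p -c0 -l 16 /dev/urandom" above
digest=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])
# blob = ASCII key bytes + their CRC-32, packed little-endian
blob = key.encode() + struct.pack("<I", zlib.crc32(key.encode()))
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(blob).decode()))
PY

Run on the value above, this prints a string shaped like the DHHC-1:00:NDdmZGU4...: secret that the nvmet_auth_set_key and rpc_cmd calls further down pass around for key0.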
01:20:44.619 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 01:20:44.619 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gLv 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.9A9 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9A9 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.g3K 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.OGd ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OGd 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jKr 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.tSb ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tSb 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.lPM 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ePZ ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ePZ 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.w6z 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
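The rpc_cmd keyring_file_add_key loop just above registers each generated secret file with the running nvmf_tgt as a named key (key0..key4) plus, where one was generated, a matching controller key (ckey0..ckey3). rpc_cmd is the autotest wrapper around scripts/rpc.py, so outside the harness the same registration looks roughly like the sketch below; the file names are the ones from this run and the socket path is the default that waitforlisten polls.

# Rough equivalent of the keyring registration performed above (assumed rpc.py
# invocation pattern; names/paths copied from this run).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
"$RPC" -s "$SOCK" keyring_file_add_key key0  /tmp/spdk.key-null.gLv
"$RPC" -s "$SOCK" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9A9
"$RPC" -s "$SOCK" keyring_file_add_key key1  /tmp/spdk.key-null.g3K
"$RPC" -s "$SOCK" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OGd
"$RPC" -s "$SOCK" keyring_file_add_key key2  /tmp/spdk.key-sha256.jKr
"$RPC" -s "$SOCK" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tSb
"$RPC" -s "$SOCK" keyring_file_add_key key3  /tmp/spdk.key-sha384.lPM
"$RPC" -s "$SOCK" keyring_file_add_key ckey3 /tmp/spdk.key-null.ePZ
"$RPC" -s "$SOCK" keyring_file_add_key key4  /tmp/spdk.key-sha512.w6z   # ckey4 is intentionally empty

These names are what the later bdev_nvme_attach_controller calls reference via --dhchap-key/--dhchap-ctrlr-key.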
01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:20:44.877 11:17:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:20:45.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:20:45.135 Waiting for block devices as requested 01:20:45.392 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:20:45.392 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:20:45.957 No valid GPT data, bailing 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:20:45.957 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:20:45.958 No valid GPT data, bailing 01:20:45.958 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:20:46.216 No valid GPT data, bailing 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:20:46.216 No valid GPT data, bailing 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 01:20:46.216 11:17:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -a 10.0.0.1 -t tcp -s 4420 01:20:46.216 01:20:46.216 Discovery Log Number of Records 2, Generation counter 2 01:20:46.216 =====Discovery Log Entry 0====== 01:20:46.216 trtype: tcp 01:20:46.216 adrfam: ipv4 01:20:46.216 subtype: current discovery subsystem 01:20:46.216 treq: not specified, sq flow control disable supported 01:20:46.216 portid: 1 01:20:46.216 trsvcid: 4420 01:20:46.216 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:20:46.216 traddr: 10.0.0.1 01:20:46.216 eflags: none 01:20:46.216 sectype: none 01:20:46.216 =====Discovery Log Entry 1====== 01:20:46.216 trtype: tcp 01:20:46.216 adrfam: ipv4 01:20:46.216 subtype: nvme subsystem 01:20:46.216 treq: not specified, sq flow control disable supported 01:20:46.216 portid: 1 01:20:46.216 trsvcid: 4420 01:20:46.216 subnqn: nqn.2024-02.io.spdk:cnode0 01:20:46.216 traddr: 10.0.0.1 01:20:46.216 eflags: none 01:20:46.216 sectype: none 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:46.216 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.475 nvme0n1 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.475 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.733 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:20:46.733 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:46.733 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.734 nvme0n1 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.734 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.992 nvme0n1 01:20:46.992 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.992 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:46.992 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.992 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.992 11:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:46.992 11:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.992 11:17:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:20:46.992 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.993 nvme0n1 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:46.993 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 01:20:47.250 11:17:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.250 nvme0n1 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:20:47.250 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.507 nvme0n1 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:47.507 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:47.763 11:17:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.020 nvme0n1 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.020 nvme0n1 01:20:48.020 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.278 11:17:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.278 nvme0n1 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.278 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.536 nvme0n1 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:48.536 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
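(Editor's note on the get_main_ns_ip steps traced just above: the helper picks the initiator address by transport. It fills an associative array mapping each transport name to the name of the environment variable that holds the address, then dereferences the entry for the active transport, tcp here, which is how 10.0.0.1 ends up in the attach calls. A minimal sketch of that logic reconstructed from the xtrace; the indirect ${!ip} expansion and the TEST_TRANSPORT variable name are assumptions, only the array entries and the 10.0.0.1 result are taken from the trace.)

  get_main_ns_ip() {
      local ip
      # map transport name -> name of the env var that holds the address (from the trace)
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # bail out if the transport is unset or unknown
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
      ip=${!ip}                              # indirect expansion -> 10.0.0.1 in this run (assumed step)
      [[ -z $ip ]] && return 1
      echo "$ip"
  }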
01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.537 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.793 nvme0n1 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:48.793 11:17:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
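(Editor's note on the per-key cycle that repeats through this section: for every keyid the host restricts the allowed digest/dhgroup via bdev_nvme_set_options, attaches with the per-key DH-HMAC-CHAP secrets, confirms the controller came up under the expected name, then detaches before the next combination. A condensed sketch of that cycle for the sha256/ffdhe4096 pass starting here; the rpc_cmd invocations, the jq filter, and the ckey conditional are copied from the trace, while the keys/ckeys arrays and the bodies of the rpc_cmd and nvmet_auth_set_key helpers are assumed from context and not shown in this log.)

  for keyid in "${!keys[@]}"; do
      # target side: install the key (and ctrlr key, when one is defined) for this digest/dhgroup
      nvmet_auth_set_key sha256 ffdhe4096 "$keyid"
      # host side: only advertise the digest/dhgroup under test
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
      # connect with the matching host key; pass a controller key only for keyids that have one
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # verify authentication succeeded, then tear down for the next key
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  done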
01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.359 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.616 nvme0n1 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.616 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.874 nvme0n1 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:49.874 11:17:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.132 nvme0n1 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.132 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.390 nvme0n1 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:50.390 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:50.391 11:17:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.391 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.649 nvme0n1 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:50.649 11:17:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.023 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.280 nvme0n1 01:20:52.280 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.280 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:52.280 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:52.280 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.280 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.281 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.281 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:52.281 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 01:20:52.281 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.281 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.538 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.796 nvme0n1 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:52.796 
11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:52.796 11:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.054 nvme0n1 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:53.054 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.055 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.620 nvme0n1 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.620 11:17:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.620 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.877 nvme0n1 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:53.877 11:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:53.877 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:54.442 nvme0n1 01:20:54.442 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:54.442 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:54.443 11:17:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:54.443 11:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:55.008 nvme0n1 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:55.008 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:55.020 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:55.585 nvme0n1 01:20:55.585 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:55.585 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:55.585 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:55.586 
11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
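[editor's note, not part of the captured log] The trace above repeats one host-side sequence for every digest/dhgroup/keyid combination: restrict the host's DH-HMAC-CHAP settings, attach to the target with the key pair under test, confirm the controller appears, then detach before the next iteration. The condensed sketch below is reconstructed only from the RPC calls visible in the xtrace; rpc_cmd is assumed to be the autotest wrapper around SPDK's rpc.py, and key3/ckey3 stand for DH-CHAP keys the harness has already provisioned on both host and target.

# 1. Limit the host to the digest and DH group pair being exercised.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# 2. Connect to the target subsystem, authenticating with the selected key pair.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# 3. Verify the controller came up (the test expects the name nvme0),
#    then detach it so the next keyid/dhgroup iteration starts clean.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
rpc_cmd bdev_nvme_detach_controller nvme0

When the controller-key argument (ckeyN) is empty, as with keyid 4 in this run, the attach call is issued with --dhchap-key only, which is why some iterations in the log omit --dhchap-ctrlr-key.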
01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:55.586 11:18:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.151 nvme0n1 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:20:56.151 
11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.151 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.724 nvme0n1 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.724 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.982 11:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.982 nvme0n1 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
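[Editor's note] The trace around this point is one pass of the host-authentication loop: for each digest/dhgroup/keyid combination the test installs the key on the target side (nvmet_auth_set_key), restricts the initiator to that digest and DH group (bdev_nvme_set_options), attaches a controller with the matching DH-HMAC-CHAP key, verifies the controller shows up, and detaches it again. A condensed sketch of one sha384/ffdhe2048 iteration, reconstructed only from the commands visible in the trace (rpc_cmd is the autotest suite's RPC helper; all values are copied from the log):

    # One authentication round as traced above (sketch, not the script itself)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the controller came up under the expected name, then tear it down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0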
01:20:56.982 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:56.983 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.241 nvme0n1 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.241 nvme0n1 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.241 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:57.499 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.500 nvme0n1 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.500 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.758 nvme0n1 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:57.758 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
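[Editor's note] Before every attach, the trace runs get_main_ns_ip, which maps the transport to the name of the variable holding the initiator-visible address (rdma maps to NVMF_FIRST_TARGET_IP, tcp maps to NVMF_INITIATOR_IP) and resolves it to 10.0.0.1 for this TCP run. A minimal sketch of that selection logic, assuming the transport variable is TEST_TRANSPORT and that bash indirection (${!ip}) is how the name is resolved; the trace only shows the chosen name and the final echoed address:

    # Sketch of the address selection seen in get_main_ns_ip (assumptions noted above)
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        [[ -z $TEST_TRANSPORT ]] && return 1          # "tcp" in this run
        ip=${ip_candidates[$TEST_TRANSPORT]}          # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"                                 # -> 10.0.0.1
    }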
01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.759 nvme0n1 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:57.759 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.017 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:58.017 11:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:58.017 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.017 11:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
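[Editor's note] The repeated line ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) is the bash idiom for an optional CLI argument: the array is empty when no controller key is registered for that keyid (keyid 4 above, where the attach carries only --dhchap-key key4) and otherwise holds the extra flag/value pair, so one attach command line serves both cases. A small sketch of the idiom; the shell variable names for the IP and NQNs are placeholders, while the flags and literal values come from the trace:

    # Optional --dhchap-ctrlr-key via the ${var:+...} array expansion
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"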
01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.017 nvme0n1 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.017 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.275 nvme0n1 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.275 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.533 nvme0n1 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.533 nvme0n1 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.533 11:18:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:58.533 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:58.793 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.794 nvme0n1 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:58.794 11:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.061 nvme0n1 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.061 11:18:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:59.061 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.341 nvme0n1 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:20:59.341 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 01:20:59.341 11:18:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.342 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.617 nvme0n1 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:20:59.617 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.875 nvme0n1 01:20:59.875 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.875 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:20:59.875 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:20:59.875 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.876 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.876 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.876 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:59.876 11:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:20:59.876 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.876 11:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:20:59.876 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.134 nvme0n1 01:21:00.134 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.134 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:00.134 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:00.134 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.134 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.134 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.391 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.650 nvme0n1 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.650 11:18:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.650 11:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.908 nvme0n1 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:00.908 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.166 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.424 nvme0n1 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.424 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.683 nvme0n1 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
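The trace above, and the iterations that follow, repeat one DH-HMAC-CHAP round trip per digest/DH-group/key combination: nvmet_auth_set_key feeds hmac(<digest>), the DH group and the DHHC-1 secret(s) for the host to the nvmet target side (the echo lines), connect_authenticate restricts the SPDK initiator to the same digest and DH group with bdev_nvme_set_options, attaches nvme0 over TCP with the matching --dhchap-key/--dhchap-ctrlr-key, verifies the controller name via bdev_nvme_get_controllers, and detaches it again. A minimal sketch of that sweep, reconstructed from the trace rather than taken from host/auth.sh itself, assuming the suite's rpc_cmd and nvmet_auth_set_key helpers and the keys/ckeys arrays are in scope:

  # Reconstructed outline of the sweep visible in the trace; not the verbatim host/auth.sh source.
  for digest in "${digests[@]}"; do        # sha384 in this part of the trace, sha512 afterwards
    for dhgroup in "${dhgroups[@]}"; do    # ffdhe4096, ffdhe6144, ffdhe8192, ... as iterated here
      for keyid in "${!keys[@]}"; do
        # Target side: program hmac(<digest>), <dhgroup> and key/ckey for the host.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # Initiator side: limit negotiation to this digest/dhgroup, then authenticate the connection.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # The attach only yields a controller if both sides agree on secret, digest and DH group.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done

As the ckey expansion shows (and as key 4 in the trace illustrates), the controller key is only passed when a ckey exists for that key id; otherwise the host authenticates unidirectionally.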
01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.683 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:01.941 11:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:02.199 nvme0n1 01:21:02.199 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:02.199 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:02.199 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:02.199 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:02.199 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:02.458 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:02.459 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:02.459 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:02.459 11:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:02.459 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:02.459 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:02.459 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:02.774 nvme0n1 01:21:02.774 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:02.774 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:02.774 11:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:02.774 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:02.774 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:02.774 11:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:03.032 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:03.598 nvme0n1 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:03.598 11:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.166 nvme0n1 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:04.166 11:18:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.166 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.733 nvme0n1 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.733 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.734 nvme0n1 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.734 11:18:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.734 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.993 11:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.993 nvme0n1 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:04.993 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.251 nvme0n1 01:21:05.251 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.251 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:05.251 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.251 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.251 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.252 11:18:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:05.252 11:18:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.252 nvme0n1 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.252 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.511 nvme0n1 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.511 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.770 nvme0n1 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.770 
11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:05.770 11:18:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:05.770 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.029 nvme0n1 01:21:06.029 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.029 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:06.029 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.029 11:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:06.029 11:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.029 nvme0n1 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.029 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.288 11:18:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.288 nvme0n1 01:21:06.288 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:06.289 
11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.289 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.548 nvme0n1 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.548 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.806 nvme0n1 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:06.806 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:06.807 11:18:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:06.807 11:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.065 nvme0n1 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
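Each iteration traced above exercises the host side of NVMe/TCP DH-CHAP through SPDK's JSON-RPC interface. For reference, the sequence the trace is about to replay for sha512 / ffdhe4096 / keyid=2 can also be issued by hand with SPDK's scripts/rpc.py; this is a minimal sketch assuming a target already listening on 10.0.0.1:4420 and host key names key2/ckey2 already registered with the SPDK application (key registration is not part of this excerpt):

    # Restrict the host to the digest/DH group under test.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # Attach with the per-keyid host key and the bidirectional controller key.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # A successful handshake leaves a controller named nvme0; clean up afterwards.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py bdev_nvme_detach_controller nvme0
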
01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.065 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.323 nvme0n1 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.323 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.324 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.582 nvme0n1 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.582 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.841 nvme0n1 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
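The nvmf/common.sh@741-755 entries here show how get_main_ns_ip picks the address to dial: the transport is mapped to the name of an environment variable, and that name is then expanded indirectly. A rough standalone reconstruction, inferred from this trace (the candidate variable names and the 10.0.0.1 result come from the log; the TEST_TRANSPORT variable and the error returns are assumptions):

    # Sketch of the IP-selection helper as it appears in the xtrace output.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                # "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                         # indirect expansion
        echo "${!ip}"                                       # -> 10.0.0.1 here
    }
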
01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:07.841 11:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.099 nvme0n1 01:21:08.099 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.099 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:08.099 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:08.099 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.099 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.099 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:08.357 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
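The host/auth.sh@101-@104 lines that keep recurring are the core of this test: two nested loops over DH group and key index, with the controller (bidirectional) key added only when a ckey exists for that index via the @58 ${ckeys[keyid]:+...} expansion; that is why the keyid=4 attach calls in this trace carry no --dhchap-ctrlr-key. A compact sketch of that pattern, reusing the rpc_cmd and nvmet_auth_set_key helpers from the traced scripts; the key arrays below are shortened stand-ins for the generated DHHC-1 secrets:

    # Nested dhgroup/keyid loop as traced; ckeys[4] is deliberately empty.
    keys=(k0 k1 k2 k3 k4)
    ckeys=(c0 c1 c2 c3 "")

    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # target side (@42-@51)
            # Expands to nothing when ckeys[keyid] is empty, dropping the flag entirely.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            # Handshake succeeded if a controller named nvme0 shows up; then detach.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
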
01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.358 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.616 nvme0n1 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.616 11:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.873 nvme0n1 01:21:08.873 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:08.873 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:08.873 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:08.873 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:08.873 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.131 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.390 nvme0n1 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.390 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.956 nvme0n1 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:09.956 11:18:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdmZGU4ZTRhNWYyMzU2ODI2ZDA0NzkxMjQ0MjZjMmF4vOvz: 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: ]] 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODIyNzgwMDBjYzY3NWY1MDg5MmNmMTMyZGVjNjU2MmVlNWRlMDdkZDdmYWY3ODVhZTczOWZmYWRkMDY4ZWFlNhG2ves=: 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:09.956 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:09.957 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:09.957 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:09.957 11:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:09.957 11:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:09.957 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:09.957 11:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:10.524 nvme0n1 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:10.524 11:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:11.090 nvme0n1 01:21:11.090 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:11.090 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:11.090 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:11.090 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:11.090 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:11.090 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:11.348 11:18:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTI1YmNiYmQ0NjBiOGI5ZmM4YjU1YjgyNjUwOGY5NzR0y7CB: 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: ]] 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjllODYzZmQ0NGMxZDgzZjE2ODkwMmJkMGQ3YTU0YWbgA2wA: 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:11.349 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:11.914 nvme0n1 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVjZjQ4MWY3ZjhiNjgwNmQ5NGNiNGMxYWZhZmNiNjVlM2RlYTA4ODE3ZjY5M2Rkm3nScg==: 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY2NzI4NTVlYmVhZmE4N2U5ZGFkODlmMWU0MTM4NDYe0+Vt: 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 01:21:11.914 11:18:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:11.914 11:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:12.480 nvme0n1 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJiNmM1YjAyZmVmOTYxZTM3ZWQ1MGM4NDNhMzE5M2U1MDNiYzE5NjVhMGMyMWZlYTZhNjM3NWQxYjYxY2M3YyL8Heg=: 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:21:12.480 11:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.047 nvme0n1 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWNmYjgyNzhlZTFmNjhjMWM5NTUwNjZkZGY4MmQ1OWI3N2FhYzAzYzhmMzEzNzkxbcgW2g==: 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmFlYzI3NzE4NzUzNTFlZmIzOWYyMzU5MGIyZDI4ZmMwNzljYzk2MzQ5ZjM3zmqB7w==: 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:13.047 
11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:13.047 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.047 2024/07/22 11:18:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:21:13.047 request: 01:21:13.047 { 01:21:13.047 "method": "bdev_nvme_attach_controller", 01:21:13.047 "params": { 01:21:13.047 "name": "nvme0", 01:21:13.047 "trtype": "tcp", 01:21:13.047 "traddr": "10.0.0.1", 01:21:13.047 "adrfam": "ipv4", 01:21:13.047 "trsvcid": "4420", 01:21:13.048 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:21:13.048 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:21:13.048 "prchk_reftag": false, 01:21:13.048 "prchk_guard": false, 01:21:13.048 "hdgst": false, 01:21:13.048 "ddgst": false 01:21:13.048 } 01:21:13.048 } 01:21:13.048 Got JSON-RPC error response 01:21:13.048 GoRPCClient: error on JSON-RPC call 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.048 2024/07/22 11:18:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:21:13.048 request: 01:21:13.048 { 01:21:13.048 "method": "bdev_nvme_attach_controller", 01:21:13.048 "params": { 01:21:13.048 "name": 
"nvme0", 01:21:13.048 "trtype": "tcp", 01:21:13.048 "traddr": "10.0.0.1", 01:21:13.048 "adrfam": "ipv4", 01:21:13.048 "trsvcid": "4420", 01:21:13.048 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:21:13.048 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:21:13.048 "prchk_reftag": false, 01:21:13.048 "prchk_guard": false, 01:21:13.048 "hdgst": false, 01:21:13.048 "ddgst": false, 01:21:13.048 "dhchap_key": "key2" 01:21:13.048 } 01:21:13.048 } 01:21:13.048 Got JSON-RPC error response 01:21:13.048 GoRPCClient: error on JSON-RPC call 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.048 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:13.307 2024/07/22 11:18:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:21:13.307 request: 01:21:13.307 { 01:21:13.307 "method": "bdev_nvme_attach_controller", 01:21:13.307 "params": { 01:21:13.307 "name": "nvme0", 01:21:13.307 "trtype": "tcp", 01:21:13.307 "traddr": "10.0.0.1", 01:21:13.307 "adrfam": "ipv4", 01:21:13.307 "trsvcid": "4420", 01:21:13.307 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:21:13.307 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:21:13.307 "prchk_reftag": false, 01:21:13.307 "prchk_guard": false, 01:21:13.307 "hdgst": false, 01:21:13.307 "ddgst": false, 01:21:13.307 "dhchap_key": "key1", 01:21:13.307 "dhchap_ctrlr_key": "ckey2" 01:21:13.307 } 01:21:13.307 } 01:21:13.307 Got JSON-RPC error response 01:21:13.307 GoRPCClient: error on JSON-RPC call 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:21:13.307 rmmod nvme_tcp 01:21:13.307 rmmod nvme_fabrics 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 110471 ']' 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 110471 01:21:13.307 11:18:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 110471 ']' 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 110471 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110471 01:21:13.307 killing process with pid 110471 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110471' 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 110471 01:21:13.307 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 110471 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:21:13.566 11:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:21:14.501 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:14.501 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:21:14.501 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:21:14.501 11:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gLv /tmp/spdk.key-null.g3K /tmp/spdk.key-sha256.jKr /tmp/spdk.key-sha384.lPM /tmp/spdk.key-sha512.w6z /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 01:21:14.501 11:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:21:14.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:21:14.761 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:21:14.761 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:21:15.020 ************************************ 01:21:15.020 END TEST nvmf_auth_host 01:21:15.020 ************************************ 01:21:15.020 01:21:15.020 real 0m32.603s 01:21:15.020 user 0m29.790s 01:21:15.020 sys 0m3.672s 01:21:15.020 11:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:15.020 11:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:21:15.020 11:18:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:21:15.020 11:18:20 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 01:21:15.020 11:18:20 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:21:15.020 11:18:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:21:15.020 11:18:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:21:15.020 11:18:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:21:15.020 ************************************ 01:21:15.020 START TEST nvmf_digest 01:21:15.020 ************************************ 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:21:15.020 * Looking for test storage... 
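For reference, the per-key loop traced in the nvmf_auth_host run above reduces to a short rpc.py sequence on the initiator side. This is a condensed sketch, not the verbatim host/auth.sh helpers: it assumes the kernel nvmet subsystem configured earlier in the test is still listening on 10.0.0.1:4420 and that the DH-HMAC-CHAP keys were already registered under the names key1/ckey1 (that setup is outside this excerpt).

# Hedged sketch of one connect_authenticate iteration (digest sha512,
# dhgroup ffdhe8192, keyid 1), mirroring the rpc_cmd calls in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Restrict the NVMe bdev module to a single digest/DH group for this attempt.
"$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Attach with the host key and the controller (bidirectional) key.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The attach only succeeds if authentication passed; verify, then tear down.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
"$rpc" bdev_nvme_detach_controller nvme0

The negative cases that follow in the trace (attaching with no key, a mismatched key2, or key1 paired with ckey2) use the same attach call and expect the JSON-RPC "Input/output error" responses shown above.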
01:21:15.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:15.020 11:18:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
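A quick note on the host identity seen in the trace above: nvme gen-hostnqn emits a UUID-based NQN, and the host ID reused for NVME_HOST is that UUID suffix. A rough, hedged equivalent (the script may derive NVME_HOSTID slightly differently):

# Not the verbatim nvmf/common.sh lines; illustrative only.
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the UUID after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "${NVME_HOST[@]}"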
01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:21:15.021 Cannot find device "nvmf_tgt_br" 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:21:15.021 Cannot find device "nvmf_tgt_br2" 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:21:15.021 Cannot find device "nvmf_tgt_br" 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 01:21:15.021 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:21:15.021 Cannot find device "nvmf_tgt_br2" 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:15.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:15.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:21:15.279 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:21:15.280 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:15.280 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:15.280 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:15.280 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:15.280 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:21:15.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:21:15.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 01:21:15.280 01:21:15.280 --- 10.0.0.2 ping statistics --- 01:21:15.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:15.280 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:21:15.280 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:21:15.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:15.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 01:21:15.280 01:21:15.280 --- 10.0.0.3 ping statistics --- 01:21:15.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:15.280 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:21:15.280 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:15.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:21:15.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 01:21:15.538 01:21:15.538 --- 10.0.0.1 ping statistics --- 01:21:15.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:15.538 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:21:15.538 ************************************ 01:21:15.538 START TEST nvmf_digest_clean 01:21:15.538 ************************************ 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:15.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
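The veth topology the pings above validate can be rebuilt with a handful of iproute2 commands. This is a condensed sketch of nvmf_veth_init (the second target interface nvmf_tgt_if2/10.0.0.3 and the stale-device cleanup are omitted); it must run as root:

# Host-side nvmf_init_if (10.0.0.1) bridged to nvmf_tgt_if (10.0.0.2),
# with the target end moved into the nvmf_tgt_ns_spdk namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two halves and open TCP/4420 so NVMe/TCP traffic can flow.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2    # host -> target, as in the ping statistics above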
01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=112033 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 112033 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112033 ']' 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:15.538 11:18:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:15.538 [2024-07-22 11:18:20.588613] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:15.539 [2024-07-22 11:18:20.588703] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:15.539 [2024-07-22 11:18:20.734243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:15.797 [2024-07-22 11:18:20.813555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:15.797 [2024-07-22 11:18:20.813639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:15.797 [2024-07-22 11:18:20.813663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:15.797 [2024-07-22 11:18:20.813674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:15.797 [2024-07-22 11:18:20.813683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:21:15.797 [2024-07-22 11:18:20.813716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:16.364 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:16.622 null0 01:21:16.622 [2024-07-22 11:18:21.626612] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:16.622 [2024-07-22 11:18:21.650765] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112089 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112089 /var/tmp/bperf.sock 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112089 ']' 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:16.622 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
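The common_target_config call above is not expanded in this trace, so the following is a hedged reconstruction of the kind of target-side setup that produces the notices shown (TCP transport init, namespace null0, listener on 10.0.0.2:4420). The null bdev size and block size are illustrative, not values taken from the test:

# Target was launched inside nvmf_tgt_ns_spdk with --wait-for-rpc; its RPC
# socket is still the default /var/tmp/spdk.sock on the shared filesystem.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" framework_start_init                        # needed because of --wait-for-rpc
"$rpc" nvmf_create_transport -t tcp -o             # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
"$rpc" bdev_null_create null0 100 4096             # 100 MiB null bdev, 4 KiB blocks (illustrative)
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420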
01:21:16.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:16.623 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:16.623 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:16.623 [2024-07-22 11:18:21.712160] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:16.623 [2024-07-22 11:18:21.712402] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112089 ] 01:21:16.880 [2024-07-22 11:18:21.855114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:16.880 [2024-07-22 11:18:21.920976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:16.880 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:16.880 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:21:16.880 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:21:16.880 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:21:16.880 11:18:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:21:17.156 11:18:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:17.156 11:18:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:17.720 nvme0n1 01:21:17.720 11:18:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:21:17.720 11:18:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:17.720 Running I/O for 2 seconds... 
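[annotation] Once bdevperf is listening, the script finishes framework init over the bperf socket, attaches the NVMe/TCP controller with data digest enabled, and triggers the timed run. The RPCs are the ones shown at host/digest.sh@87-92 above:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# --ddgst enables the NVMe/TCP data digest (CRC32C) for this controller.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Runs the 2-second workload configured when bdevperf was launched.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests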
01:21:19.617 01:21:19.617 Latency(us) 01:21:19.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:19.617 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:21:19.617 nvme0n1 : 2.00 23153.99 90.45 0.00 0.00 5521.86 2889.54 14775.39 01:21:19.617 =================================================================================================================== 01:21:19.617 Total : 23153.99 90.45 0.00 0.00 5521.86 2889.54 14775.39 01:21:19.617 0 01:21:19.617 11:18:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:21:19.617 11:18:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:21:19.617 11:18:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:21:19.617 11:18:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:21:19.617 11:18:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:21:19.617 | select(.opcode=="crc32c") 01:21:19.617 | "\(.module_name) \(.executed)"' 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112089 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112089 ']' 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112089 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112089 01:21:19.875 killing process with pid 112089 01:21:19.875 Received shutdown signal, test time was about 2.000000 seconds 01:21:19.875 01:21:19.875 Latency(us) 01:21:19.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:19.875 =================================================================================================================== 01:21:19.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112089' 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112089 01:21:19.875 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112089 01:21:20.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
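[annotation] The pass/fail decision for each clean pass comes from the accel framework's crc32c counters, not from the latency table. A sketch of the check at host/digest.sh@93-96, with the jq filter copied verbatim from the trace:
# Read which accel module executed crc32c and how many times.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))        # digest work must actually have happened
[[ $acc_module == software ]] # scan_dsa=false, so the software module is expected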
01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112166 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112166 /var/tmp/bperf.sock 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112166 ']' 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:20.134 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:20.134 [2024-07-22 11:18:25.294996] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:20.134 I/O size of 131072 is greater than zero copy threshold (65536). 01:21:20.134 Zero copy mechanism will not be used. 
01:21:20.134 [2024-07-22 11:18:25.295064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112166 ] 01:21:20.392 [2024-07-22 11:18:25.431178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:20.392 [2024-07-22 11:18:25.498724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:20.392 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:20.392 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:21:20.392 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:21:20.392 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:21:20.392 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:21:20.652 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:20.652 11:18:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:20.910 nvme0n1 01:21:20.910 11:18:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:21:20.910 11:18:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:21.168 I/O size of 131072 is greater than zero copy threshold (65536). 01:21:21.168 Zero copy mechanism will not be used. 01:21:21.168 Running I/O for 2 seconds... 
01:21:23.064 01:21:23.064 Latency(us) 01:21:23.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:23.065 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:21:23.065 nvme0n1 : 2.00 7916.76 989.60 0.00 0.00 2017.67 558.55 9055.88 01:21:23.065 =================================================================================================================== 01:21:23.065 Total : 7916.76 989.60 0.00 0.00 2017.67 558.55 9055.88 01:21:23.065 0 01:21:23.065 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:21:23.065 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:21:23.065 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:21:23.065 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:21:23.065 | select(.opcode=="crc32c") 01:21:23.065 | "\(.module_name) \(.executed)"' 01:21:23.065 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112166 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112166 ']' 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112166 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112166 01:21:23.322 killing process with pid 112166 01:21:23.322 Received shutdown signal, test time was about 2.000000 seconds 01:21:23.322 01:21:23.322 Latency(us) 01:21:23.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:23.322 =================================================================================================================== 01:21:23.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112166' 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112166 01:21:23.322 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112166 01:21:23.579 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 01:21:23.579 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 01:21:23.579 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:21:23.579 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:21:23.579 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:21:23.579 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:21:23.579 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112233 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112233 /var/tmp/bperf.sock 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112233 ']' 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:23.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:23.580 11:18:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:23.580 [2024-07-22 11:18:28.728867] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:21:23.580 [2024-07-22 11:18:28.728951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112233 ] 01:21:23.837 [2024-07-22 11:18:28.868185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:23.837 [2024-07-22 11:18:28.934927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:24.770 11:18:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:24.770 11:18:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:21:24.770 11:18:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:21:24.770 11:18:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:21:24.770 11:18:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:21:24.770 11:18:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:24.770 11:18:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:25.333 nvme0n1 01:21:25.333 11:18:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:21:25.333 11:18:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:25.333 Running I/O for 2 seconds... 
01:21:27.284 01:21:27.284 Latency(us) 01:21:27.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:27.284 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:21:27.284 nvme0n1 : 2.00 26245.67 102.52 0.00 0.00 4870.23 1936.29 7983.48 01:21:27.284 =================================================================================================================== 01:21:27.284 Total : 26245.67 102.52 0.00 0.00 4870.23 1936.29 7983.48 01:21:27.284 0 01:21:27.284 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:21:27.284 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:21:27.284 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:21:27.284 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:21:27.284 | select(.opcode=="crc32c") 01:21:27.284 | "\(.module_name) \(.executed)"' 01:21:27.284 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112233 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112233 ']' 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112233 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112233 01:21:27.541 killing process with pid 112233 01:21:27.541 Received shutdown signal, test time was about 2.000000 seconds 01:21:27.541 01:21:27.541 Latency(us) 01:21:27.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:27.541 =================================================================================================================== 01:21:27.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112233' 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112233 01:21:27.541 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112233 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112323 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112323 /var/tmp/bperf.sock 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112323 ']' 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:27.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:27.800 11:18:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:27.800 [2024-07-22 11:18:32.911888] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:27.800 [2024-07-22 11:18:32.912237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112323 ] 01:21:27.800 I/O size of 131072 is greater than zero copy threshold (65536). 01:21:27.800 Zero copy mechanism will not be used. 
01:21:28.066 [2024-07-22 11:18:33.048734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:28.066 [2024-07-22 11:18:33.120854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:28.997 11:18:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:28.997 11:18:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 01:21:28.997 11:18:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:21:28.997 11:18:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:21:28.997 11:18:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:21:28.997 11:18:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:28.997 11:18:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:29.255 nvme0n1 01:21:29.255 11:18:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:21:29.256 11:18:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:29.514 I/O size of 131072 is greater than zero copy threshold (65536). 01:21:29.514 Zero copy mechanism will not be used. 01:21:29.514 Running I/O for 2 seconds... 01:21:31.416 01:21:31.416 Latency(us) 01:21:31.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:31.416 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:21:31.416 nvme0n1 : 2.00 6358.67 794.83 0.00 0.00 2511.23 2025.66 12571.00 01:21:31.416 =================================================================================================================== 01:21:31.416 Total : 6358.67 794.83 0.00 0.00 2511.23 2025.66 12571.00 01:21:31.416 0 01:21:31.416 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:21:31.416 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:21:31.416 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:21:31.416 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:21:31.416 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:21:31.416 | select(.opcode=="crc32c") 01:21:31.416 | "\(.module_name) \(.executed)"' 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112323 01:21:31.674 11:18:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112323 ']' 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112323 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112323 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:31.674 killing process with pid 112323 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112323' 01:21:31.674 Received shutdown signal, test time was about 2.000000 seconds 01:21:31.674 01:21:31.674 Latency(us) 01:21:31.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:31.674 =================================================================================================================== 01:21:31.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112323 01:21:31.674 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112323 01:21:31.932 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 112033 01:21:31.932 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112033 ']' 01:21:31.932 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112033 01:21:31.932 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 01:21:31.932 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:31.932 11:18:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112033 01:21:31.932 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:21:31.932 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:21:31.932 killing process with pid 112033 01:21:31.933 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112033' 01:21:31.933 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112033 01:21:31.933 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112033 01:21:32.191 01:21:32.191 real 0m16.793s 01:21:32.191 user 0m30.068s 01:21:32.191 sys 0m5.262s 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:21:32.191 ************************************ 01:21:32.191 END TEST nvmf_digest_clean 01:21:32.191 ************************************ 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest -- 
host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:21:32.191 ************************************ 01:21:32.191 START TEST nvmf_digest_error 01:21:32.191 ************************************ 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=112438 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 112438 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112438 ']' 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:32.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:32.191 11:18:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:32.450 [2024-07-22 11:18:37.417471] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:32.450 [2024-07-22 11:18:37.417552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:32.450 [2024-07-22 11:18:37.546031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:32.450 [2024-07-22 11:18:37.628742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:32.450 [2024-07-22 11:18:37.628810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:32.450 [2024-07-22 11:18:37.628821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:32.450 [2024-07-22 11:18:37.628829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:32.450 [2024-07-22 11:18:37.628835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:21:32.450 [2024-07-22 11:18:37.628860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:33.385 [2024-07-22 11:18:38.425408] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:33.385 null0 01:21:33.385 [2024-07-22 11:18:38.538174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:33.385 [2024-07-22 11:18:38.562313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112482 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112482 /var/tmp/bperf.sock 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112482 ']' 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 
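[annotation] The nvmf_digest_error suite differs from the clean one in that, before any I/O is issued, crc32c on the target is routed to the accel error-injection module (host/digest.sh@104 above). The script's rpc_cmd helper issues this against the target's RPC socket; shown here as a plain rpc.py call for clarity:
# Route crc32c to the "error" accel module on the nvmf_tgt side.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error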
01:21:33.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:33.385 11:18:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:33.657 [2024-07-22 11:18:38.615634] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:33.657 [2024-07-22 11:18:38.615775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112482 ] 01:21:33.657 [2024-07-22 11:18:38.752051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:33.657 [2024-07-22 11:18:38.829739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:34.649 11:18:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:34.906 nvme0n1 01:21:34.906 11:18:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:21:34.906 11:18:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:34.906 11:18:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:34.906 11:18:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:34.906 11:18:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:21:34.906 11:18:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:35.165 Running I/O for 2 seconds... 
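[annotation] For the error-path bperf run the bdev layer is told not to retry, and crc32c corruption is injected for 256 operations on the target; the resulting digest mismatches surface immediately below as "data digest error" at nvme_tcp.c:1459 and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions. A sketch of the sequence traced at host/digest.sh@61-69 (rpc_cmd calls shown as plain rpc.py against the target socket):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1                       # surface every failure, no retries
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable   # start clean
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt 256 crc32c ops
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests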
01:21:35.165 [2024-07-22 11:18:40.173885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.173946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.174001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.185392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.185444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.185472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.197661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.197713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.197741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.211047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.211109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.211137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.220980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.221014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.221041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.233190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.233240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.233268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.247146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.247201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.247231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.261309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.261362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.261391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.276483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.276568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.276613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.290245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.290299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.290328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.306687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.306740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.306784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.320078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.320160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.320188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.333764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.333814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.333842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.349033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.349102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.349131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.165 [2024-07-22 11:18:40.363060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.165 [2024-07-22 11:18:40.363121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.165 [2024-07-22 11:18:40.363149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.375892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.376012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.376042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.389882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.389932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.389959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.402182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.402220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.402248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.415630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.415704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.415733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.426863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.426933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.426946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.442432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.442494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.442523] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.457621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.457673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.457702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.471278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.471347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.471376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.486260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.486296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.486309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.500228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.500264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.500292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.512861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.512900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.512929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.525051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.525137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.425 [2024-07-22 11:18:40.525166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.540147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.540182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:21:35.425 [2024-07-22 11:18:40.540210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.425 [2024-07-22 11:18:40.554034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.425 [2024-07-22 11:18:40.554084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.426 [2024-07-22 11:18:40.554121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.426 [2024-07-22 11:18:40.567316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.426 [2024-07-22 11:18:40.567354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.426 [2024-07-22 11:18:40.567381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.426 [2024-07-22 11:18:40.579187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.426 [2024-07-22 11:18:40.579220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.426 [2024-07-22 11:18:40.579263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.426 [2024-07-22 11:18:40.592616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.426 [2024-07-22 11:18:40.592651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.426 [2024-07-22 11:18:40.592679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.426 [2024-07-22 11:18:40.605646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.426 [2024-07-22 11:18:40.605680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.426 [2024-07-22 11:18:40.605709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.426 [2024-07-22 11:18:40.617484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.426 [2024-07-22 11:18:40.617518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.426 [2024-07-22 11:18:40.617545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.426 [2024-07-22 11:18:40.630547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.426 [2024-07-22 11:18:40.630582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13646 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.426 [2024-07-22 11:18:40.630610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.642017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.642051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.642078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.652582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.652616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.652643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.663759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.663796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.663824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.674332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.674365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.674392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.685077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.685111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.685140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.697518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.697552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.697580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.707170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.707203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.707231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.718772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.718805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.718838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.730756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.730791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.730822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.740422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.740456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.740489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.750691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.750725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.750756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.761713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.761746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.761779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.773691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.773725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.773756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.782552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.782586] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.782619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.793933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.793976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.794005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.804612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.804647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.804677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.815553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.815586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.815618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.826048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.826099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.826127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.836526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.836560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.836593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.847457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.686 [2024-07-22 11:18:40.847490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.847521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.859527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 
01:21:35.686 [2024-07-22 11:18:40.859562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.686 [2024-07-22 11:18:40.859589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.686 [2024-07-22 11:18:40.869257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.687 [2024-07-22 11:18:40.869290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.687 [2024-07-22 11:18:40.869321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.687 [2024-07-22 11:18:40.880800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.687 [2024-07-22 11:18:40.880835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.687 [2024-07-22 11:18:40.880866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.945 [2024-07-22 11:18:40.893706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.945 [2024-07-22 11:18:40.893741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.945 [2024-07-22 11:18:40.893769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.945 [2024-07-22 11:18:40.904733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.945 [2024-07-22 11:18:40.904768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.904795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:40.913730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:40.913765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.913791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:40.925243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:40.925277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.925304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:40.937652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:40.937687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.937714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:40.948514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:40.948548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.948575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:40.957618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:40.957651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.957678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:40.970942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:40.970988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.971016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:40.980397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:40.980431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.980458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:40.991240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:40.991273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:40.991300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.001971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.002021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.002048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.011683] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.011734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.011765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.023770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.023821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.023849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.035236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.035288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.035301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.046038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.046090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.046102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.057906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.057939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.057967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.068501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.068536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.068563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.079537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.079569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.079596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:21:35.946 [2024-07-22 11:18:41.091199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.091232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.091259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.103278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.103327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.103355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.115799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.115849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.115877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.124285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.124317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.124344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.137108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.137141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.137169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:35.946 [2024-07-22 11:18:41.147456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:35.946 [2024-07-22 11:18:41.147490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:35.946 [2024-07-22 11:18:41.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.158106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.158155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.158182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.169697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.169730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.169757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.181191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.181225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.181253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.190500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.190534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.190561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.202477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.202511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.202539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.212668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.212702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.212730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.223295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.223329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.223357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.234484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.234518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.234546] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.244370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.244403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.244431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.256346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.256381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.256409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.266493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.266528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.266555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.276576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.276609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.276637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.288905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.288939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.288966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.301589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.301642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.301671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.314913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.314988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.315001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.325536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.325570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.325597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.337062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.337095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.337123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.348052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.206 [2024-07-22 11:18:41.348086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.206 [2024-07-22 11:18:41.348114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.206 [2024-07-22 11:18:41.358718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.207 [2024-07-22 11:18:41.358754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.207 [2024-07-22 11:18:41.358782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.207 [2024-07-22 11:18:41.369715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.207 [2024-07-22 11:18:41.369749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.207 [2024-07-22 11:18:41.369776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.207 [2024-07-22 11:18:41.379588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.207 [2024-07-22 11:18:41.379625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.207 [2024-07-22 11:18:41.379675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.207 [2024-07-22 11:18:41.391757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.207 [2024-07-22 11:18:41.391806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:21:36.207 [2024-07-22 11:18:41.391834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.207 [2024-07-22 11:18:41.402922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.207 [2024-07-22 11:18:41.402982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.207 [2024-07-22 11:18:41.403011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.413859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.413893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.413921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.424171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.424204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.424231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.436503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.436537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.436564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.446836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.446871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.446898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.458308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.458342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.458369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.467849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.467898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:10782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.467925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.478015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.478048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.478075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.489966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.489998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.490025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.501147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.501180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.501207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.512222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.512271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.512299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.521777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.521811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.521838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.533761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.533795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.466 [2024-07-22 11:18:41.533822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.466 [2024-07-22 11:18:41.544073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.466 [2024-07-22 11:18:41.544106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.544133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.556185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.556219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.556246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.567020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.567070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.567097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.577440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.577473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.577500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.589372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.589405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.589432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.599455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.599488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.599516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.610052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.610085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.610112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.622115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 
[2024-07-22 11:18:41.622148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.622176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.633092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.633125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.633152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.643967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.643999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.644026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.653085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.653118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.653146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.467 [2024-07-22 11:18:41.662657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.467 [2024-07-22 11:18:41.662691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.467 [2024-07-22 11:18:41.662718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.674741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.674775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.674803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.686985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.687034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.687062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.697249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.697283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.697310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.709263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.709297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.709325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.719522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.719576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.719588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.732118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.732168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.732181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.742954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.743034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.743048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.755838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.755894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.755923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.766242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.766293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.766321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.778559] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.778610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.778638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.790656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.790706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.790733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.800473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.800536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.800564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.812116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.812165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.812193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.823876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.823930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.823958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.834077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.834128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.834156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.845243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.845294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.845321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
01:21:36.726 [2024-07-22 11:18:41.856161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.856210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.856238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.868465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.868515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.868543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.878769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.878821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.878848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.889648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.889698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.889726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.902158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.902209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.902237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.913946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.914020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.914049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.726 [2024-07-22 11:18:41.925379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.726 [2024-07-22 11:18:41.925429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.726 [2024-07-22 11:18:41.925458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.985 [2024-07-22 11:18:41.936860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.985 [2024-07-22 11:18:41.936911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.985 [2024-07-22 11:18:41.936940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.985 [2024-07-22 11:18:41.948045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.985 [2024-07-22 11:18:41.948110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.985 [2024-07-22 11:18:41.948138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.985 [2024-07-22 11:18:41.958334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.985 [2024-07-22 11:18:41.958399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.985 [2024-07-22 11:18:41.958427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.985 [2024-07-22 11:18:41.969816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.985 [2024-07-22 11:18:41.969867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.985 [2024-07-22 11:18:41.969895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.985 [2024-07-22 11:18:41.981915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.985 [2024-07-22 11:18:41.981949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:41.982001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:41.993659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:41.993693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:41.993720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.004253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.004287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.004314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.015097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.015130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.015158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.024812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.024846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.024874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.036789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.036823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.036851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.046641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.046675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.046702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.057764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.057797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.057826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.070738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.070772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.070800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.082104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.082155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.082168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.093112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.093165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.093177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.103701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.103751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.103779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.114604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.114638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.114666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.125799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.125834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.125861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.136206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.136269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.136297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.148832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.148866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:36.986 [2024-07-22 11:18:42.148892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:36.986 [2024-07-22 11:18:42.157939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae9a40) 01:21:36.986 [2024-07-22 11:18:42.157985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:21:36.986 [2024-07-22 11:18:42.158012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
01:21:36.986
01:21:36.986 Latency(us)
01:21:36.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:21:36.986 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
01:21:36.986 nvme0n1 : 2.00 22054.85 86.15 0.00 0.00 5796.91 2532.07 18826.71
01:21:36.986 ===================================================================================================================
01:21:36.986 Total : 22054.85 86.15 0.00 0.00 5796.91 2532.07 18826.71
01:21:36.986 0
01:21:36.986 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:21:36.986 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:21:36.986 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:21:36.986 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:21:36.986 | .driver_specific
01:21:36.986 | .nvme_error
01:21:36.986 | .status_code
01:21:36.986 | .command_transient_transport_error'
01:21:37.245 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 173 > 0 ))
01:21:37.245 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112482
01:21:37.245 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112482 ']'
01:21:37.245 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112482
01:21:37.245 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
01:21:37.245 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
01:21:37.245 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112482
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
01:21:37.503 killing process with pid 112482 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112482'
01:21:37.503 Received shutdown signal, test time was about 2.000000 seconds
01:21:37.503
01:21:37.503 Latency(us)
01:21:37.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:21:37.503 ===================================================================================================================
01:21:37.503 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112482
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112482
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112567
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112567 /var/tmp/bperf.sock
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112567 ']'
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
01:21:37.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
01:21:37.503 11:18:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:21:37.503 I/O size of 131072 is greater than zero copy threshold (65536).
01:21:37.503 Zero copy mechanism will not be used.
01:21:37.503 [2024-07-22 11:18:42.682659] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
01:21:37.503 [2024-07-22 11:18:42.682758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112567 ]
01:21:37.762 [2024-07-22 11:18:42.815744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:21:37.762 [2024-07-22 11:18:42.882024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:21:38.698 11:18:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:21:38.698 11:18:43
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:38.956 nvme0n1 01:21:38.957 11:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:21:38.957 11:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:38.957 11:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:38.957 11:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:38.957 11:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:21:38.957 11:18:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:39.216 I/O size of 131072 is greater than zero copy threshold (65536). 01:21:39.216 Zero copy mechanism will not be used. 01:21:39.217 Running I/O for 2 seconds... 01:21:39.217 [2024-07-22 11:18:44.203198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.203245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.203274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.207590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.207626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.207693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.212375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.212410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.212437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.215499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.215533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.215561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.220407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.220443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.220470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.224178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.224213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.224240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.227472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.227506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.227533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.231396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.231432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.231460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.236160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.236212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.236240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.240245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.240298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.240327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.243670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.243739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.243751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.248227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.248279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.248322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.253291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.253360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.253389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.257435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.257486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.257513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.260403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.260469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.260497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.265078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.265131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.265160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.269851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.269901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.269929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.273176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.273228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.273256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.277477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.277527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.277554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.282571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.282622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.282650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.285957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.286033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.286061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.290050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.290102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.290130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.293645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.293696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.293723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.297617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.297668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.297695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.301566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.301617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.301645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.306288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 
[2024-07-22 11:18:44.306339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.306382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.310659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.310709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.310736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.217 [2024-07-22 11:18:44.315010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.217 [2024-07-22 11:18:44.315060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.217 [2024-07-22 11:18:44.315087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.319328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.319377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.319406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.322145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.322195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.322223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.326573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.326622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.326649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.331143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.331192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.331220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.335414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.335463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.335490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.340322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.340373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.340401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.343365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.343417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.343430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.347769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.347811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.347825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.352876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.352927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.352955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.357824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.357876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.357904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.362216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.362267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.362295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.365373] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.365424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.365452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.369886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.369939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.369983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.374836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.374889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.374917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.379225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.379275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.379303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.383177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.383243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.383255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.386728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.386779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.386807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.390496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.390548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.390576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
01:21:39.218 [2024-07-22 11:18:44.395025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.395087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.395100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.399503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.399557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.399585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.402715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.402766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.402794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.406783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.406834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.406861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.411287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.411337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.411366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.415919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.415998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.416012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.218 [2024-07-22 11:18:44.420236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.218 [2024-07-22 11:18:44.420306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.218 [2024-07-22 11:18:44.420320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.424169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.424235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.424262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.428873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.428940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.428952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.432600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.432652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.432680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.437010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.437061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.437089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.441698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.441751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.441779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.446216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.446268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.446297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.449391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.449443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.449470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.453134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.453185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.453213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.457807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.457859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.457887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.462462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.462514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.462542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.465489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.465539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.465566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.478 [2024-07-22 11:18:44.469583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.478 [2024-07-22 11:18:44.469635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.478 [2024-07-22 11:18:44.469664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.474868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.474923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.474952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.480165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.480219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.480247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.483316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.483367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.483395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.488170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.488240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.488270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.492936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.492999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.493027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.497527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.497578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.497606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.501698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.501750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.501778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.504828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.504879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.504907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.509557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.509622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 
[2024-07-22 11:18:44.509650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.515073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.515127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.515155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.520057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.520124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.520152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.523081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.523131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.523159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.527461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.527513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.527540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.532307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.532359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.532387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.536690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.536740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.536784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.541186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.541237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.541265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.545351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.545400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.545428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.548060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.548110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.548151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.552880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.552931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.552961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.556743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.556793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.556821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.560584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.560635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.560663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.565022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.565055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.565082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.569270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.569320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.569347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.572529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.572579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.572605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.576813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.576864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.576892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.580331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.580381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.580408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.584712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.584762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.584790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.588152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.588220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.588249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.479 [2024-07-22 11:18:44.592869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.479 [2024-07-22 11:18:44.592920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.479 [2024-07-22 11:18:44.592948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.595554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.595603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.595631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.600284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.600334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.600361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.604093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.604142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.604169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.607963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.608040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.608067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.611590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.611663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.611713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.615642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.615733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.615762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.620034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.620083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.620125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.623730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 
[2024-07-22 11:18:44.623769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.623797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.627510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.627560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.627587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.631627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.631701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.631729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.635429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.635479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.635507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.639666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.639722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.639735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.644459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.644526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.644554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.648587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.648639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.648667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.652688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.652741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.652768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.657951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.658060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.658075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.662981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.663057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.663102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.667490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.667542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.667554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.670930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.670992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.671021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.675703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.675743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.675757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.480 [2024-07-22 11:18:44.680305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.480 [2024-07-22 11:18:44.680380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.480 [2024-07-22 11:18:44.680407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.739 [2024-07-22 11:18:44.685322] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.739 [2024-07-22 11:18:44.685383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.739 [2024-07-22 11:18:44.685411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.739 [2024-07-22 11:18:44.689766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.739 [2024-07-22 11:18:44.689819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.739 [2024-07-22 11:18:44.689831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.693067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.693118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.693146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.697937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.698015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.698044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.702349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.702415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.702458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.705656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.705690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.705717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.710390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.710425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.710451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
01:21:39.740 [2024-07-22 11:18:44.714925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.715018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.715032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.719026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.719059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.719086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.721706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.721739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.721765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.725762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.725797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.725824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.729835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.729870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.729897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.733239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.733272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.733299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.737037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.737070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.737098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.740550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.740600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.740628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.744832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.744866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.744893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.749200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.749250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.749277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.752542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.752590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.752618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.757292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.757344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.757371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.762311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.762364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.762396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.765719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.765786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.765813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.770243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.770293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.770320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.774945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.775021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.775059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.779445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.779478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.779505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.783844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.783898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.783911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.787083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.787133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.787162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.791390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.791440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.791467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.795497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.795547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.795574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.799923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.800010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.800054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.804556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.740 [2024-07-22 11:18:44.804591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.740 [2024-07-22 11:18:44.804618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.740 [2024-07-22 11:18:44.807827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.807880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.807908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.812391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.812425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.812453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.816596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.816647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.816675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.821289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.821340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.821368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.825094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.825148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 
[2024-07-22 11:18:44.825176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.829409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.829460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.829487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.833423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.833467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.833495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.837756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.837790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.837818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.841744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.841779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.841806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.845799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.845834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.845861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.849486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.849520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.849546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.853729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.853763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.853790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.858853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.858903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.858931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.862740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.862793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.862821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.866936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.866980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.867008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.871692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.871747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.871777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.876322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.876356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.876382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.879631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.879726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.879756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.884068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.884131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.884157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.888929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.888988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.889001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.891738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.891792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.891821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.895977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.896072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.896100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.900878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.900912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.900939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.904460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.904493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.904520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.908315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.908385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.908412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.912861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.912912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.912940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.917346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.917409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.917425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.921800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.921851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.921878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.925530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.925565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.741 [2024-07-22 11:18:44.925592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:39.741 [2024-07-22 11:18:44.930614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.741 [2024-07-22 11:18:44.930647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.742 [2024-07-22 11:18:44.930674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:39.742 [2024-07-22 11:18:44.935307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.742 [2024-07-22 11:18:44.935342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.742 [2024-07-22 11:18:44.935369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:39.742 [2024-07-22 11:18:44.938352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.742 [2024-07-22 11:18:44.938385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.742 [2024-07-22 11:18:44.938420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:39.742 [2024-07-22 11:18:44.943311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:39.742 
[2024-07-22 11:18:44.943362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:39.742 [2024-07-22 11:18:44.943389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.001 [2024-07-22 11:18:44.946667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.001 [2024-07-22 11:18:44.946719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.001 [2024-07-22 11:18:44.946730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.001 [2024-07-22 11:18:44.950647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.001 [2024-07-22 11:18:44.950697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.001 [2024-07-22 11:18:44.950725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.001 [2024-07-22 11:18:44.955799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.001 [2024-07-22 11:18:44.955853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.001 [2024-07-22 11:18:44.955866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.001 [2024-07-22 11:18:44.960558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.001 [2024-07-22 11:18:44.960625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.001 [2024-07-22 11:18:44.960637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.001 [2024-07-22 11:18:44.965103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.965135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.965162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.968173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.968223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.968251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.972412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.972447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.972474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.976590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.976625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.976651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.980674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.980709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.980737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.984392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.984425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.984452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.987925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.988025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.988039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.992062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.992136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.992165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.995752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.995804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.995816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:44.999300] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:44.999334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:44.999360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.002928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.002988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.003001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.006261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.006294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.006321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.010517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.010551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.010578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.014764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.014799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.014826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.018504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.018540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.018567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.022439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.022474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.022501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
01:21:40.002 [2024-07-22 11:18:45.026635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.026670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.026697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.029727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.029760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.029787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.034391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.034430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.034458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.039220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.039255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.039282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.042425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.042459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.042485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.046815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.046850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.046876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.051986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.052064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.052093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.057500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.057549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.057577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.061994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.062044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.062071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.064816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.064865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.064893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.069478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.069527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.069554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.073681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.073715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.073742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.002 [2024-07-22 11:18:45.077876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.002 [2024-07-22 11:18:45.077910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.002 [2024-07-22 11:18:45.077937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.082277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.082327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.082354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.086506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.086556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.086584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.089879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.089929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.089956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.094615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.094649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.094676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.099631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.099724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.099753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.102899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.102933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.102959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.107015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.107049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.107076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.111479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.111530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.111557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.115723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.115764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.115793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.119860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.119920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.119948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.124469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.124521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.124549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.128638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.128671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.128698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.132863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.132897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.132924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.137043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.137087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.137117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.140828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.140862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:21:40.003 [2024-07-22 11:18:45.140889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.145543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.145594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.145621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.150295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.150331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.150373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.153906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.153941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.153984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.159126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.159177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.159204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.162410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.162453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.162480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.166949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.167007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.167052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.172463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.172512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.172539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.176787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.176839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.176852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.180447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.180499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.180526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.184905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.184939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.184966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.189349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.189383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.189420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.192803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.192838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.192865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.197123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.197174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.003 [2024-07-22 11:18:45.197202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.003 [2024-07-22 11:18:45.201342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.003 [2024-07-22 11:18:45.201393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.004 [2024-07-22 11:18:45.201436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.004 [2024-07-22 11:18:45.206400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.004 [2024-07-22 11:18:45.206435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.004 [2024-07-22 11:18:45.206463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.210699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.210734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.210762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.214402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.214436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.214463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.218847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.218882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.218910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.222440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.222475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.222502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.226201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.226236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.226263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.230016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 
[2024-07-22 11:18:45.230051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.230078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.233703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.233738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.233766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.237852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.237904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.237946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.241858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.241910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.241938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.246636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.246687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.246714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.250525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.250560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.250587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.254979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.255012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.255039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.259444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.259478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.259505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.262764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.262815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.262842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.267717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.267771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.267799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.272762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.272797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.272823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.276437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.276470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.276497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.280680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.280715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.280741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.285672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.285705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.285732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.290735] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.290769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.290795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.294378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.294412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.294451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.298848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.298882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.298909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.303697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.303752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.303780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.306920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.306955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.306995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.311501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.311536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.311563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.316280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.316331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.316373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
01:21:40.263 [2024-07-22 11:18:45.319508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.319542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.319569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.324103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.324153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.324181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.329304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.329356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.329401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.332990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.333062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.333090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.337688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.337723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.337750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.341728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.341762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.341788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.345372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.345414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.345441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.349253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.349304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.349332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.353356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.353450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.353478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.357802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.357856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.357901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.362667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.362735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.362763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.367306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.367374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.367403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.370645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.370697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.370725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.375690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.375731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.375745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.380573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.380626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.380654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.385320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.263 [2024-07-22 11:18:45.385388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.263 [2024-07-22 11:18:45.385417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.263 [2024-07-22 11:18:45.389143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.389210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.389239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.394368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.394440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.394453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.400122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.400186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.400214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.403442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.403476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.403502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.408085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.408151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.408178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.411551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.411585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.411612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.416331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.416375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.416409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.421105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.421156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.421184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.425952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.426027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.426066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.430119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.430173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.430202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.434742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.434793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.434820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.438865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.438907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 
[2024-07-22 11:18:45.438921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.443460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.443494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.443521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.448677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.448712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.448740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.451946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.452024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.452099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.456874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.456910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.456937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.461996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.462047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.462074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:40.264 [2024-07-22 11:18:45.465493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.264 [2024-07-22 11:18:45.465560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:40.264 [2024-07-22 11:18:45.465572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:40.523 [2024-07-22 11:18:45.470286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:40.523 [2024-07-22 11:18:45.470337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:40.523 [2024-07-22 11:18:45.470364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
01:21:40.523 [2024-07-22 11:18:45.474926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0)
01:21:40.523 [2024-07-22 11:18:45.474984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:40.523 [2024-07-22 11:18:45.474997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
01:21:40.523 [2024-07-22 11:18:45.477986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0)
01:21:40.523 [2024-07-22 11:18:45.478036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:40.523 [2024-07-22 11:18:45.478063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-entry pattern (data digest error on tqpair=(0x8a52f0), READ command print, completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0) repeats for the remaining qid:1 READ commands logged between 2024-07-22 11:18:45.482 and 11:18:46.081 ...]
01:21:41.043 [2024-07-22 11:18:46.084548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0)
01:21:41.043 [2024-07-22 11:18:46.084582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:41.043 [2024-07-22 11:18:46.084609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
01:21:41.043 [2024-07-22 11:18:46.089471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0)
01:21:41.043 [2024-07-22 11:18:46.089505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:41.043 [2024-07-22 11:18:46.089532]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.093523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.093557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.093584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.097121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.097172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.097200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.101592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.101625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.101653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.105273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.105349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.109661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.109695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.109723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.113868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.113902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.113929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.118176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.118210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:21:41.043 [2024-07-22 11:18:46.118238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.122200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.122235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.122262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.125701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.125735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.125761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.129966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.130011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.130023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.134329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.134412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.134440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.138600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.138649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.138676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.143848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.143923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.143937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.148033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.148094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.148107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.152885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.152921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.152948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.158189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.158243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.158276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.162010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.162087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.162116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.166966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.167054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.167096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.172129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.172180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.172208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.176115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.176163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.176190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.181172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.181206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.181233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.184851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.184885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.184912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.189762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.189797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.043 [2024-07-22 11:18:46.189825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:41.043 [2024-07-22 11:18:46.194273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8a52f0) 01:21:41.043 [2024-07-22 11:18:46.194324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:41.044 [2024-07-22 11:18:46.194352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:41.044 01:21:41.044 Latency(us) 01:21:41.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:41.044 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:21:41.044 nvme0n1 : 2.00 7308.71 913.59 0.00 0.00 2185.68 603.23 5928.03 01:21:41.044 =================================================================================================================== 01:21:41.044 Total : 7308.71 913.59 0.00 0.00 2185.68 603.23 5928.03 01:21:41.044 0 01:21:41.044 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:21:41.044 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:21:41.044 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:21:41.044 | .driver_specific 01:21:41.044 | .nvme_error 01:21:41.044 | .status_code 01:21:41.044 | .command_transient_transport_error' 01:21:41.044 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:21:41.304 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 471 > 0 )) 01:21:41.304 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112567 01:21:41.304 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112567 ']' 01:21:41.304 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112567 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:21:41.562 11:18:46 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112567 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:41.562 killing process with pid 112567 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112567' 01:21:41.562 Received shutdown signal, test time was about 2.000000 seconds 01:21:41.562 01:21:41.562 Latency(us) 01:21:41.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:41.562 =================================================================================================================== 01:21:41.562 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112567 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112567 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112652 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112652 /var/tmp/bperf.sock 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112652 ']' 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:41.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:41.562 11:18:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:41.562 [2024-07-22 11:18:46.765710] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:21:41.562 [2024-07-22 11:18:46.765831] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112652 ] 01:21:41.820 [2024-07-22 11:18:46.895505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:41.820 [2024-07-22 11:18:46.962915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:42.077 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:42.077 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:21:42.077 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:21:42.077 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:21:42.335 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:21:42.335 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:42.335 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:42.335 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:42.335 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:42.335 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:42.594 nvme0n1 01:21:42.594 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:21:42.594 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:42.594 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:42.594 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:42.594 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:21:42.594 11:18:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:42.594 Running I/O for 2 seconds... 
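The trace above is the setup for the randwrite digest-error pass: a new bdevperf (pid 112652) is started on /var/tmp/bperf.sock, the controller is attached with data digests enabled (--ddgst), crc32c corruption is injected into the target's accel layer, and I/O runs for 2 seconds, which is why every completion that follows is flagged COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch of that sequence, using only the RPCs and paths visible in the trace (socket paths, the 10.0.0.2:4420 address and the NQN are taken from this trace and are assumptions outside this environment):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # Keep per-controller NVMe error statistics and retry indefinitely, so digest
    # failures show up as transient-error counters instead of failing the job.
    $rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the TCP controller with data digest enabled.
    $rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # On the target side (default RPC socket), inject corruption into crc32c
    # operations; arguments copied from the host/digest.sh trace above.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

    # Drive I/O for the 2-second window, then read back how many completions were
    # recorded as COMMAND TRANSIENT TRANSPORT ERROR.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests
    $rpc -s $bperf_sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The final jq extraction is the get_transient_errcount helper (host/digest.sh@27-28) seen earlier in the trace; in the preceding randread pass it returned 471, which is the value checked by the (( 471 > 0 )) assertion before the test moves on to this randwrite pass.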
01:21:42.594 [2024-07-22 11:18:47.736254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190df988 01:21:42.594 [2024-07-22 11:18:47.736940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.594 [2024-07-22 11:18:47.737002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:21:42.594 [2024-07-22 11:18:47.747616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e01f8 01:21:42.594 [2024-07-22 11:18:47.748434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.594 [2024-07-22 11:18:47.748484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:21:42.594 [2024-07-22 11:18:47.759356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190de038 01:21:42.594 [2024-07-22 11:18:47.760156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.594 [2024-07-22 11:18:47.760205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:21:42.594 [2024-07-22 11:18:47.772282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190feb58 01:21:42.594 [2024-07-22 11:18:47.773428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.594 [2024-07-22 11:18:47.773458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:42.594 [2024-07-22 11:18:47.783784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190feb58 01:21:42.594 [2024-07-22 11:18:47.785541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.594 [2024-07-22 11:18:47.785571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:42.594 [2024-07-22 11:18:47.791002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e88f8 01:21:42.594 [2024-07-22 11:18:47.791844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.594 [2024-07-22 11:18:47.791907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:21:42.851 [2024-07-22 11:18:47.803268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1430 01:21:42.851 [2024-07-22 11:18:47.804360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.851 [2024-07-22 11:18:47.804424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 
p:0 m:0 dnr:0 01:21:42.851 [2024-07-22 11:18:47.814146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1430 01:21:42.851 [2024-07-22 11:18:47.815174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.851 [2024-07-22 11:18:47.815222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:42.851 [2024-07-22 11:18:47.824685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190feb58 01:21:42.851 [2024-07-22 11:18:47.825796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.851 [2024-07-22 11:18:47.825827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:42.851 [2024-07-22 11:18:47.835789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1430 01:21:42.851 [2024-07-22 11:18:47.836704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.851 [2024-07-22 11:18:47.836734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:21:42.851 [2024-07-22 11:18:47.847536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f92c0 01:21:42.852 [2024-07-22 11:18:47.848730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.848760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.858539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ec840 01:21:42.852 [2024-07-22 11:18:47.859510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.859540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.871255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f3a28 01:21:42.852 [2024-07-22 11:18:47.872653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.872684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.884249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f3a28 01:21:42.852 [2024-07-22 11:18:47.886126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.886156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.892514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e3d08 01:21:42.852 [2024-07-22 11:18:47.893280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.893326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.906912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e6b70 01:21:42.852 [2024-07-22 11:18:47.908879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.908910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.918847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fef90 01:21:42.852 [2024-07-22 11:18:47.920930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.920974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.928921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190dece0 01:21:42.852 [2024-07-22 11:18:47.929900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.929932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.940099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190de470 01:21:42.852 [2024-07-22 11:18:47.940986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.941053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.952093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e3060 01:21:42.852 [2024-07-22 11:18:47.953338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.953369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.964976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e1f80 01:21:42.852 [2024-07-22 11:18:47.966920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.966952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.974977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f9f68 01:21:42.852 [2024-07-22 11:18:47.976322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.976351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.985321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e84c0 01:21:42.852 [2024-07-22 11:18:47.986425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.986454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:47.995853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fb8b8 01:21:42.852 [2024-07-22 11:18:47.996926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:47.996980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:48.005683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ed0b0 01:21:42.852 [2024-07-22 11:18:48.006689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:48.006719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:48.015907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ee5c8 01:21:42.852 [2024-07-22 11:18:48.017107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:48.017138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:48.027278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f31b8 01:21:42.852 [2024-07-22 11:18:48.028770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:48.028800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:48.039289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7100 01:21:42.852 [2024-07-22 11:18:48.040962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:48.041017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:21:42.852 [2024-07-22 11:18:48.050383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f2948 01:21:42.852 [2024-07-22 11:18:48.051690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:42.852 [2024-07-22 11:18:48.051738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:21:43.109 [2024-07-22 11:18:48.061534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fb8b8 01:21:43.109 [2024-07-22 11:18:48.062669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.109 [2024-07-22 11:18:48.062698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:21:43.109 [2024-07-22 11:18:48.073852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7538 01:21:43.109 [2024-07-22 11:18:48.075247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.109 [2024-07-22 11:18:48.075294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:21:43.109 [2024-07-22 11:18:48.082891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ebfd0 01:21:43.109 [2024-07-22 11:18:48.084717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.109 [2024-07-22 11:18:48.084748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:21:43.109 [2024-07-22 11:18:48.093956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fbcf0 01:21:43.109 [2024-07-22 11:18:48.095090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.109 [2024-07-22 11:18:48.095137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:21:43.109 [2024-07-22 11:18:48.103622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f8e88 01:21:43.109 [2024-07-22 11:18:48.104877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.109 [2024-07-22 11:18:48.104907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:21:43.109 [2024-07-22 11:18:48.114611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f5378 01:21:43.109 [2024-07-22 11:18:48.115701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.109 [2024-07-22 11:18:48.115751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:21:43.109 [2024-07-22 11:18:48.127246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fda78 01:21:43.109 [2024-07-22 11:18:48.129149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.109 [2024-07-22 11:18:48.129181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:21:43.109 [2024-07-22 11:18:48.137561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e4578 01:21:43.109 [2024-07-22 11:18:48.138593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.138623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.148969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e4578 01:21:43.110 [2024-07-22 11:18:48.150021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.150069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.160485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f4b08 01:21:43.110 [2024-07-22 11:18:48.161594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.161643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.172080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f4b08 01:21:43.110 [2024-07-22 11:18:48.173093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.173144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.182695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f8e88 01:21:43.110 [2024-07-22 11:18:48.183729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.183779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.196835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e88f8 01:21:43.110 [2024-07-22 11:18:48.198523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 
11:18:48.198571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.207594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e4140 01:21:43.110 [2024-07-22 11:18:48.208854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.208901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.218529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e5658 01:21:43.110 [2024-07-22 11:18:48.219756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.219806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.229805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fda78 01:21:43.110 [2024-07-22 11:18:48.230561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.230611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.241041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e0ea0 01:21:43.110 [2024-07-22 11:18:48.242098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.242155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.251891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e3498 01:21:43.110 [2024-07-22 11:18:48.252925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.252996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.262934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e6fa8 01:21:43.110 [2024-07-22 11:18:48.263863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.263911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.274807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1430 01:21:43.110 [2024-07-22 11:18:48.275940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:21:43.110 [2024-07-22 11:18:48.276010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.286810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ff3c8 01:21:43.110 [2024-07-22 11:18:48.288159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.288206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.300188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f2d80 01:21:43.110 [2024-07-22 11:18:48.301897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.301944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:21:43.110 [2024-07-22 11:18:48.307693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190de8a8 01:21:43.110 [2024-07-22 11:18:48.308489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.110 [2024-07-22 11:18:48.308536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.320277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e8d30 01:21:43.367 [2024-07-22 11:18:48.321772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.321818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.332443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190eaef0 01:21:43.367 [2024-07-22 11:18:48.334543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.334589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.340991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f9b30 01:21:43.367 [2024-07-22 11:18:48.341918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.341969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.354931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f6020 01:21:43.367 [2024-07-22 11:18:48.356514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11993 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.356561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.367489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f2510 01:21:43.367 [2024-07-22 11:18:48.369262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.369308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.379900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f9b30 01:21:43.367 [2024-07-22 11:18:48.381535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.381581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.388129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fac10 01:21:43.367 [2024-07-22 11:18:48.388951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.388989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.401101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f57b0 01:21:43.367 [2024-07-22 11:18:48.402070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.402124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.414700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fef90 01:21:43.367 [2024-07-22 11:18:48.416537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.416582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.423410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1430 01:21:43.367 [2024-07-22 11:18:48.424340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.424385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.438453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fd640 01:21:43.367 [2024-07-22 11:18:48.440182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.440212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.449262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ec408 01:21:43.367 [2024-07-22 11:18:48.450412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.367 [2024-07-22 11:18:48.450449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:21:43.367 [2024-07-22 11:18:48.462504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fb480 01:21:43.367 [2024-07-22 11:18:48.464262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.464291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.473747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e73e0 01:21:43.368 [2024-07-22 11:18:48.475473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.475500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.484295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f5be8 01:21:43.368 [2024-07-22 11:18:48.485615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.485645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.494681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e12d8 01:21:43.368 [2024-07-22 11:18:48.495719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.495766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.505540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fef90 01:21:43.368 [2024-07-22 11:18:48.506273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.506318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.516752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fef90 01:21:43.368 [2024-07-22 11:18:48.517577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:14913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.517639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.528240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ecc78 01:21:43.368 [2024-07-22 11:18:48.528977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.529046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.541602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190df988 01:21:43.368 [2024-07-22 11:18:48.543084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.543115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.553572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fd640 01:21:43.368 [2024-07-22 11:18:48.555000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.555062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.564681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ec408 01:21:43.368 [2024-07-22 11:18:48.566057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.566100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:21:43.368 [2024-07-22 11:18:48.572200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e5220 01:21:43.368 [2024-07-22 11:18:48.572917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.368 [2024-07-22 11:18:48.573004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.584738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ecc78 01:21:43.626 [2024-07-22 11:18:48.585967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.586023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.594356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e5a90 01:21:43.626 [2024-07-22 11:18:48.595347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.595379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.604674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190eea00 01:21:43.626 [2024-07-22 11:18:48.605518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.605563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.616188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190eea00 01:21:43.626 [2024-07-22 11:18:48.616925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.616995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.627546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e9e10 01:21:43.626 [2024-07-22 11:18:48.628454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.628484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.640795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f2510 01:21:43.626 [2024-07-22 11:18:48.642122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.642167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.651749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ebb98 01:21:43.626 [2024-07-22 11:18:48.653086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.653131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.662342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190eea00 01:21:43.626 [2024-07-22 11:18:48.663567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.663597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.671123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e6fa8 01:21:43.626 [2024-07-22 
11:18:48.672506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.672537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.682425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fef90 01:21:43.626 [2024-07-22 11:18:48.684143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.684173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.694126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7da8 01:21:43.626 [2024-07-22 11:18:48.695380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.695411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.706149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7da8 01:21:43.626 [2024-07-22 11:18:48.707347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.707377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.718007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7da8 01:21:43.626 [2024-07-22 11:18:48.719244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.719273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.731274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7da8 01:21:43.626 [2024-07-22 11:18:48.733229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.733275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.741803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e5220 01:21:43.626 [2024-07-22 11:18:48.743152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.743198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.752853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ef6a8 
01:21:43.626 [2024-07-22 11:18:48.754194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.754239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.764654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fbcf0 01:21:43.626 [2024-07-22 11:18:48.766071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.766100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.776311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e4578 01:21:43.626 [2024-07-22 11:18:48.777856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.777885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.787836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1ca0 01:21:43.626 [2024-07-22 11:18:48.789142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.626 [2024-07-22 11:18:48.789188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:21:43.626 [2024-07-22 11:18:48.799929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1ca0 01:21:43.626 [2024-07-22 11:18:48.801216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.627 [2024-07-22 11:18:48.801261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:21:43.627 [2024-07-22 11:18:48.810933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e01f8 01:21:43.627 [2024-07-22 11:18:48.812081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.627 [2024-07-22 11:18:48.812110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:21:43.627 [2024-07-22 11:18:48.821543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190feb58 01:21:43.627 [2024-07-22 11:18:48.822501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.627 [2024-07-22 11:18:48.822532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.834844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with 
pdu=0x2000190e4140 01:21:43.885 [2024-07-22 11:18:48.836455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.836501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.844840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fa7d8 01:21:43.885 [2024-07-22 11:18:48.846588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.846620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.857247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f31b8 01:21:43.885 [2024-07-22 11:18:48.858719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.858751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.869788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f96f8 01:21:43.885 [2024-07-22 11:18:48.871548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.871577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.877737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fc560 01:21:43.885 [2024-07-22 11:18:48.878689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.878718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.890341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e3060 01:21:43.885 [2024-07-22 11:18:48.891476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.891505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.901775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ec840 01:21:43.885 [2024-07-22 11:18:48.902572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.902634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.911377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c660d0) with pdu=0x2000190fef90 01:21:43.885 [2024-07-22 11:18:48.912147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.912194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.923187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ef6a8 01:21:43.885 [2024-07-22 11:18:48.924364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.924393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.931558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1430 01:21:43.885 [2024-07-22 11:18:48.932591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.932621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.942321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f2948 01:21:43.885 [2024-07-22 11:18:48.943794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.943843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.952303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e95a0 01:21:43.885 [2024-07-22 11:18:48.953275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.953305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.963807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f5378 01:21:43.885 [2024-07-22 11:18:48.964981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.965038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.974674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f2948 01:21:43.885 [2024-07-22 11:18:48.975816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.975864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.986882] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e9168 01:21:43.885 [2024-07-22 11:18:48.988419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:48.988475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:48.999342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e73e0 01:21:43.885 [2024-07-22 11:18:49.000903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.000933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:49.006429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f0bc0 01:21:43.885 [2024-07-22 11:18:49.006994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.007037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:49.017526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f5be8 01:21:43.885 [2024-07-22 11:18:49.018573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.018603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:49.025920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190df550 01:21:43.885 [2024-07-22 11:18:49.027156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.027187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:49.036741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e6300 01:21:43.885 [2024-07-22 11:18:49.037830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.037860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:49.048867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ea680 01:21:43.885 [2024-07-22 11:18:49.050045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.050106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:49.063227] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190de8a8 01:21:43.885 [2024-07-22 11:18:49.065156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.065191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:49.074007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f20d8 01:21:43.885 [2024-07-22 11:18:49.075352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.075392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:21:43.885 [2024-07-22 11:18:49.085629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7538 01:21:43.885 [2024-07-22 11:18:49.086817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:43.885 [2024-07-22 11:18:49.086847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.097738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fc560 01:21:44.144 [2024-07-22 11:18:49.099106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.099151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.108676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ddc00 01:21:44.144 [2024-07-22 11:18:49.109810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.109841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.119258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7538 01:21:44.144 [2024-07-22 11:18:49.120550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.120579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.130976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f92c0 01:21:44.144 [2024-07-22 11:18:49.131912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.131983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:21:44.144 
[2024-07-22 11:18:49.142881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fd208 01:21:44.144 [2024-07-22 11:18:49.143808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.143855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.154476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fa7d8 01:21:44.144 [2024-07-22 11:18:49.155266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.155330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.167136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f0bc0 01:21:44.144 [2024-07-22 11:18:49.168408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.168449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.178608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f0bc0 01:21:44.144 [2024-07-22 11:18:49.179833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.179865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.190130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fc560 01:21:44.144 [2024-07-22 11:18:49.191193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.191222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.201458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fc560 01:21:44.144 [2024-07-22 11:18:49.202628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.202674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.211881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f0788 01:21:44.144 [2024-07-22 11:18:49.213673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.213704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0071 
p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.223920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190feb58 01:21:44.144 [2024-07-22 11:18:49.224829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.224859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.236085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7970 01:21:44.144 [2024-07-22 11:18:49.237303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.237349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.248176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7970 01:21:44.144 [2024-07-22 11:18:49.249245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.249275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.259114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7970 01:21:44.144 [2024-07-22 11:18:49.260284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.260331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.269350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f20d8 01:21:44.144 [2024-07-22 11:18:49.270329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.270360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.279188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ecc78 01:21:44.144 [2024-07-22 11:18:49.280265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.144 [2024-07-22 11:18:49.280294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:21:44.144 [2024-07-22 11:18:49.290466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7970 01:21:44.144 [2024-07-22 11:18:49.291446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.145 [2024-07-22 11:18:49.291477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:44.145 [2024-07-22 11:18:49.301272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ecc78 01:21:44.145 [2024-07-22 11:18:49.302622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.145 [2024-07-22 11:18:49.302653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:21:44.145 [2024-07-22 11:18:49.312186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e3498 01:21:44.145 [2024-07-22 11:18:49.313336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.145 [2024-07-22 11:18:49.313396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:44.145 [2024-07-22 11:18:49.323874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e3498 01:21:44.145 [2024-07-22 11:18:49.324990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.145 [2024-07-22 11:18:49.325043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:44.145 [2024-07-22 11:18:49.335549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e3498 01:21:44.145 [2024-07-22 11:18:49.336783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.145 [2024-07-22 11:18:49.336813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:21:44.145 [2024-07-22 11:18:49.346098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e1710 01:21:44.145 [2024-07-22 11:18:49.347226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.145 [2024-07-22 11:18:49.347272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:21:44.403 [2024-07-22 11:18:49.356439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e1f80 01:21:44.403 [2024-07-22 11:18:49.357441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.403 [2024-07-22 11:18:49.357471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:21:44.403 [2024-07-22 11:18:49.366987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f5be8 01:21:44.404 [2024-07-22 11:18:49.368237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.368268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.377366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ff3c8 01:21:44.404 [2024-07-22 11:18:49.378710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.378741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.388631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fbcf0 01:21:44.404 [2024-07-22 11:18:49.389848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.389878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.400062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fd640 01:21:44.404 [2024-07-22 11:18:49.401269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.401315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.410901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fc128 01:21:44.404 [2024-07-22 11:18:49.411956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.412045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.422480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fc128 01:21:44.404 [2024-07-22 11:18:49.423390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.423461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.433960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fc128 01:21:44.404 [2024-07-22 11:18:49.434919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.434972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.444380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e7818 01:21:44.404 [2024-07-22 11:18:49.445309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.445356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.457327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f35f0 01:21:44.404 [2024-07-22 11:18:49.458586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.458631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.467520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f8618 01:21:44.404 [2024-07-22 11:18:49.468489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.468537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.480833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f3a28 01:21:44.404 [2024-07-22 11:18:49.482334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.482390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.492674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f46d0 01:21:44.404 [2024-07-22 11:18:49.494575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.494628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.501508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f2d80 01:21:44.404 [2024-07-22 11:18:49.502259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.502306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.513583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f7100 01:21:44.404 [2024-07-22 11:18:49.514390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.514458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.526684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190de8a8 01:21:44.404 [2024-07-22 11:18:49.528237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.528285] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.538415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190de038 01:21:44.404 [2024-07-22 11:18:49.539927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.540003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.549508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190dfdc0 01:21:44.404 [2024-07-22 11:18:49.550657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.550705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.559150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e38d0 01:21:44.404 [2024-07-22 11:18:49.560354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.560408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.568474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e84c0 01:21:44.404 [2024-07-22 11:18:49.569650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.569696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.578357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fdeb0 01:21:44.404 [2024-07-22 11:18:49.579191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.579240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.589246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e84c0 01:21:44.404 [2024-07-22 11:18:49.590310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.590358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:21:44.404 [2024-07-22 11:18:49.601624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fe2e8 01:21:44.404 [2024-07-22 11:18:49.602828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.404 [2024-07-22 11:18:49.602875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:21:44.663 [2024-07-22 11:18:49.612755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190e3498 01:21:44.663 [2024-07-22 11:18:49.613786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.663 [2024-07-22 11:18:49.613834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:21:44.663 [2024-07-22 11:18:49.626359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f92c0 01:21:44.663 [2024-07-22 11:18:49.627621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.663 [2024-07-22 11:18:49.627723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:21:44.663 [2024-07-22 11:18:49.637976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ef270 01:21:44.663 [2024-07-22 11:18:49.639313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.663 [2024-07-22 11:18:49.639360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:21:44.663 [2024-07-22 11:18:49.649715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1ca0 01:21:44.663 [2024-07-22 11:18:49.650884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.663 [2024-07-22 11:18:49.650931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:21:44.663 [2024-07-22 11:18:49.662487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1ca0 01:21:44.663 [2024-07-22 11:18:49.664375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.663 [2024-07-22 11:18:49.664431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:21:44.663 [2024-07-22 11:18:49.670557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ea680 01:21:44.663 [2024-07-22 11:18:49.671270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.663 [2024-07-22 11:18:49.671330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:21:44.663 [2024-07-22 11:18:49.683804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f31b8 01:21:44.663 [2024-07-22 11:18:49.685121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:21:44.663 [2024-07-22 
11:18:49.685168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0
01:21:44.663 [2024-07-22 11:18:49.695533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190f1430
01:21:44.663 [2024-07-22 11:18:49.696858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:21:44.663 [2024-07-22 11:18:49.696904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0
01:21:44.663 [2024-07-22 11:18:49.707903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190ebfd0
01:21:44.663 [2024-07-22 11:18:49.709346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:21:44.663 [2024-07-22 11:18:49.709388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0
01:21:44.663 [2024-07-22 11:18:49.718113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c660d0) with pdu=0x2000190fc128
01:21:44.663 [2024-07-22 11:18:49.718665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:21:44.663 [2024-07-22 11:18:49.718713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0
01:21:44.663
01:21:44.663 Latency(us)
01:21:44.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:21:44.663 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
01:21:44.663 nvme0n1 : 2.00 22535.02 88.03 0.00 0.00 5671.07 1936.29 16324.42
01:21:44.663 ===================================================================================================================
01:21:44.663 Total : 22535.02 88.03 0.00 0.00 5671.07 1936.29 16324.42
01:21:44.663 0
01:21:44.663 11:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:21:44.663 11:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:21:44.663 11:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:21:44.664 | .driver_specific
01:21:44.664 | .nvme_error
01:21:44.664 | .status_code
01:21:44.664 | .command_transient_transport_error'
01:21:44.664 11:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 177 > 0 ))
01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112652
01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112652 ']'
01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112652
01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
01:21:44.923 11:18:50
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112652 01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:44.923 killing process with pid 112652 01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112652' 01:21:44.923 Received shutdown signal, test time was about 2.000000 seconds 01:21:44.923 01:21:44.923 Latency(us) 01:21:44.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:44.923 =================================================================================================================== 01:21:44.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112652 01:21:44.923 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112652 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112729 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112729 /var/tmp/bperf.sock 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112729 ']' 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:45.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:45.182 11:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:45.182 [2024-07-22 11:18:50.277889] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:45.182 [2024-07-22 11:18:50.278016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112729 ] 01:21:45.182 I/O size of 131072 is greater than zero copy threshold (65536). 01:21:45.182 Zero copy mechanism will not be used. 
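Note: the xtrace output above shows how host/digest.sh validates the first randwrite run. get_transient_errcount reads the initiator's NVMe error counters over the bperf RPC socket and asserts that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded (177 in this run) before killing the bdevperf process. A minimal sketch of that check, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev that appear in the trace:

    # Sketch only: extract the transient transport error counter the way
    # host/digest.sh's get_transient_errcount does (bdev_get_iostat + jq).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    # Each injected digest mismatch is completed as COMMAND TRANSIENT
    # TRANSPORT ERROR (00/22), so the counter must be non-zero for a pass.
    (( errcount > 0 ))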
01:21:45.440 [2024-07-22 11:18:50.406701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:45.440 [2024-07-22 11:18:50.471175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:46.375 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:21:46.633 nvme0n1 01:21:46.893 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:21:46.893 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:46.893 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:46.893 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:46.893 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:21:46.893 11:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:21:46.893 I/O size of 131072 is greater than zero copy threshold (65536). 01:21:46.893 Zero copy mechanism will not be used. 01:21:46.893 Running I/O for 2 seconds... 
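Note: the trace above is the RPC sequence host/digest.sh runs before each error pass. NVMe error statistics and unlimited bdev retries are enabled, any previous crc32c error injection is cleared, the controller is attached over TCP with --ddgst so data digests are generated and checked, corruption is injected into the next 32 crc32c operations, and bdevperf's perform_tests is started. A condensed sketch under those assumptions (paths, addresses and the NQN are copied from the log; routing the injection calls to the default RPC socket rather than the bperf socket is an assumption based on which helper, rpc_cmd or bperf_rpc, each call goes through in the trace):

    # Sketch of the per-run setup traced above, not a drop-in script.
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # bdevperf side: count NVMe errors per status code and retry failed I/O forever.
    $RPC_PY -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any previous crc32c error injection (default SPDK RPC socket assumed).
    $RPC_PY accel_error_inject_error -o crc32c -t disable

    # Attach the TCP controller with data digest enabled (--ddgst).
    $RPC_PY -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c operations so the affected PDUs fail the
    # data digest check and complete with a transient transport error.
    $RPC_PY accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the 2-second workload that produces the digest errors logged below.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests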
01:21:46.893 [2024-07-22 11:18:51.951684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.952074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.952117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:51.957635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.957903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.957994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:51.963760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.964128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.964162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:51.969387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.969769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.969811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:51.975152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.975526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.975563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:51.981516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.981815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.981862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:51.987618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.987952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.988026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:51.993658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.993954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.994021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:51.999450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:51.999793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:51.999826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:52.005319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:52.005700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:52.005736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:52.011157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:52.011501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:52.011536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:52.016957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:52.017320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:52.017368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:52.022689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:52.023028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:52.023071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:52.028687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:52.029025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:52.029073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:52.034454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.893 [2024-07-22 11:18:52.034764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.893 [2024-07-22 11:18:52.034812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:46.893 [2024-07-22 11:18:52.040297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.040651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.040690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.045870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.046185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.046233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.051767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.052145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.052180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.057614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.057927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.057981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.063870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.064267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.064315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.069693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.070028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.070074] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.075496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.075880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.075920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.081440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.081787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.081820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.087340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.087763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.087797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.093149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.093475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.093516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:46.894 [2024-07-22 11:18:52.098902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:46.894 [2024-07-22 11:18:52.099318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:46.894 [2024-07-22 11:18:52.099357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.104895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.105204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.105275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.110615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.110971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:21:47.154 [2024-07-22 11:18:52.110994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.116433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.116762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.116796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.121840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.122163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.122220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.127191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.127503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.127550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.132297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.132627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.132662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.137313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.137613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.137661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.142228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.142542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.142590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.147081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.147395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.147445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.152056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.152416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.152448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.157715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.158066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.158097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.163141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.163464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.163496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.168740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.169074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.169105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.174523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.174890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.174929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.180531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.180858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.180908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.186133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.186441] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.186488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.191738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.192137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.192175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.197534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.154 [2024-07-22 11:18:52.197862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.154 [2024-07-22 11:18:52.197895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.154 [2024-07-22 11:18:52.203541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.203883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.203908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.208865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.209223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.209258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.214175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.214488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.214536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.219247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.219575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.219611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.224247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.224538] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.224602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.229108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.229397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.229444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.234007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.234308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.234356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.238843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.239152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.239200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.243918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.244270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.244308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.249932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.250296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.250336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.255540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.255896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.255932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.261489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 
01:21:47.155 [2024-07-22 11:18:52.261822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.261854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.267504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.267853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.267907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.273652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.273964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.274048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.279568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.279909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.279953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.285471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.285799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.285845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.291412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.291718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.291750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.296810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.297143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.297177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.302342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.302652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.302698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.307864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.308258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.308303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.313531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.313826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.313873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.319336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.319729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.319765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.325013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.325357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.325407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.330772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.331072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.331117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.336509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.336790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.336852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.342279] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.342590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.342637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.347930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.348344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.348397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.353689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.353993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.354067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.155 [2024-07-22 11:18:52.359760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.155 [2024-07-22 11:18:52.360137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.155 [2024-07-22 11:18:52.360199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.365530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.365855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.365890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.371245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.371515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.371576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.376978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.377344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.377383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
01:21:47.416 [2024-07-22 11:18:52.382776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.383054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.383123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.388597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.388937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.388979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.394153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.394455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.394510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.399827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.400199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.400235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.405727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.406096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.406129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.411455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.411793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.411841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.417107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.417459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.417496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.422608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.422939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.422995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.427920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.428298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.428359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.433197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.433496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.433543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.438356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.438670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.438701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:47.416 [2024-07-22 11:18:52.443289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.416 [2024-07-22 11:18:52.443584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.416 [2024-07-22 11:18:52.443632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:47.417 [2024-07-22 11:18:52.448509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.417 [2024-07-22 11:18:52.448844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.417 [2024-07-22 11:18:52.448876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:47.417 [2024-07-22 11:18:52.454268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:47.417 [2024-07-22 11:18:52.454550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:47.417 [2024-07-22 11:18:52.454608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
01:21:47.417 [2024-07-22 11:18:52.459784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90
01:21:47.417 [2024-07-22 11:18:52.460146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:47.417 [2024-07-22 11:18:52.460184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
01:21:47.417 [2024-07-22 11:18:52.465285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90
01:21:47.417 [2024-07-22 11:18:52.465634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:47.417 [2024-07-22 11:18:52.465675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats for qid:1 cid:15 with varying LBAs from [2024-07-22 11:18:52.470849] through [2024-07-22 11:18:52.795863] ...]
01:21:47.679 [2024-07-22 11:18:52.801046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90
01:21:47.679 [2024-07-22 11:18:52.801373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:47.679 [2024-07-22 11:18:52.801412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
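
The repeated tcp.c:2113 data_crc32_calc_done errors above mean that the data digest (DDGST) recomputed over a received NVMe/TCP PDU payload did not match the digest carried in the PDU, so each affected WRITE is completed with a transient transport error instead of being acknowledged as written. NVMe/TCP header and data digests are CRC-32C checksums; the sketch below is only an illustration of that checksum in Python (a plain bitwise implementation with an invented payload), not SPDK's optimized digest code.

def crc32c(data: bytes) -> int:
    # Bitwise CRC-32C (Castagnoli): init 0xFFFFFFFF, reflected polynomial
    # 0x82F63B78, final XOR 0xFFFFFFFF -- the checksum family NVMe/TCP uses
    # for its optional header (HDGST) and data (DDGST) digests.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Well-known CRC-32C check value, confirming this is the Castagnoli variant.
assert crc32c(b"123456789") == 0xE3069283

# Hypothetical 32-block payload: a receiver recomputes the digest over the
# received bytes and compares it with the PDU's DDGST field; any mismatch is
# reported as a data digest error like the tcp.c:2113 messages in this log.
payload = bytes(32 * 512)
print(hex(crc32c(payload)))
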
01:21:47.679 [2024-07-22 11:18:52.806819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90
01:21:47.679 [2024-07-22 11:18:52.807149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:47.679 [2024-07-22 11:18:52.807180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats for qid:1 cid:15 with varying LBAs from [2024-07-22 11:18:52.812667] through [2024-07-22 11:18:53.211139] ...]
01:21:48.207 [2024-07-22 11:18:53.216276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90
01:21:48.207 [2024-07-22 11:18:53.216570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:21:48.207 [2024-07-22 11:18:53.216589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
01:21:48.207 [2024-07-22 11:18:53.221625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.207 [2024-07-22 11:18:53.221888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.207 [2024-07-22 11:18:53.221927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.207 [2024-07-22 11:18:53.227417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.207 [2024-07-22 11:18:53.227721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.207 [2024-07-22 11:18:53.227749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.207 [2024-07-22 11:18:53.233258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.207 [2024-07-22 11:18:53.233537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.207 [2024-07-22 11:18:53.233562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.207 [2024-07-22 11:18:53.238768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.207 [2024-07-22 11:18:53.239046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.207 [2024-07-22 11:18:53.239071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.207 [2024-07-22 11:18:53.244064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.207 [2024-07-22 11:18:53.244316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.207 [2024-07-22 11:18:53.244340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.207 [2024-07-22 11:18:53.249190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.207 [2024-07-22 11:18:53.249452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.249485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.254759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.255086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.255112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.260698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.260964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.261014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.266173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.266463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.266502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.271608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.271931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.271991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.277222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.277532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.277559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.283281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.283625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.283691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.288936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.289285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.289335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.295012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.295346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.295389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.300936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.301304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.301370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.306489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.306752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.306777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.311852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.312194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.312268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.317090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.317364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.317389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.322407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.322671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.322697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.327569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.327888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.327916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.332637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.332963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.333012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.337705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.338015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.338040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.342752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.343034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.343059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.348109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.348379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.348420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.353583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.353845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.353872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.359186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.359460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.359502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.364748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.365011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.365045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.369771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.370080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 
[2024-07-22 11:18:53.370107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.374813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.375089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.375115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.380150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.380432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.380457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.385202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.385454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.385479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.390342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.390635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.390660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.395413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.395717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.395745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.400523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.400788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.208 [2024-07-22 11:18:53.400815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.208 [2024-07-22 11:18:53.405527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.208 [2024-07-22 11:18:53.405845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.209 [2024-07-22 11:18:53.405889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.209 [2024-07-22 11:18:53.410936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.209 [2024-07-22 11:18:53.411254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.209 [2024-07-22 11:18:53.411294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.416291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.416581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.416607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.421499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.421761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.421786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.426468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.426732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.426758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.431738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.432108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.432154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.437059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.437323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.437349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.442132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.442421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.442450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.447269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.447531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.447557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.452156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.452418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.452444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.457246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.457508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.457533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.462441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.462704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.462729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.467 [2024-07-22 11:18:53.467338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.467 [2024-07-22 11:18:53.467598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.467 [2024-07-22 11:18:53.467618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.472229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.472493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.472519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.477461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.477724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.477750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.483222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.483479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.483504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.488803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.489077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.489102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.494514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.494784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.494811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.499935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.500276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.500302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.505569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.505836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.505861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.511157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.511426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.511451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.516690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 
[2024-07-22 11:18:53.516956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.516991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.522402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.522668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.522696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.528085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.528353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.528394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.533641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.533908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.533934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.539226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.539516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.539541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.544702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.544955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.544991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.550181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.550459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.550485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.555612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.555911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.555937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.561282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.561549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.561573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.566837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.567101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.567125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.572581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.572834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.572858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.578166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.578465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.578490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.583709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.584016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.584052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.589251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.589501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.589520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.594649] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.594897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.594922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.600204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.600457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.600493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.605642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.605892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.605917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.611260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.611511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.611536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.616865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.617129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.617153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.622247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.622520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.622546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.468 [2024-07-22 11:18:53.627722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.468 [2024-07-22 11:18:53.628038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.468 [2024-07-22 11:18:53.628065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
01:21:48.468 [2024-07-22 11:18:53.633321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.469 [2024-07-22 11:18:53.633598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.469 [2024-07-22 11:18:53.633623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.469 [2024-07-22 11:18:53.638744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.469 [2024-07-22 11:18:53.639017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.469 [2024-07-22 11:18:53.639042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.469 [2024-07-22 11:18:53.644354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.469 [2024-07-22 11:18:53.644621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.469 [2024-07-22 11:18:53.644645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.469 [2024-07-22 11:18:53.649884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.469 [2024-07-22 11:18:53.650205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.469 [2024-07-22 11:18:53.650235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.469 [2024-07-22 11:18:53.655452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.469 [2024-07-22 11:18:53.655760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.469 [2024-07-22 11:18:53.655786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.469 [2024-07-22 11:18:53.660929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.469 [2024-07-22 11:18:53.661204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.469 [2024-07-22 11:18:53.661230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.469 [2024-07-22 11:18:53.666463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.469 [2024-07-22 11:18:53.666714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.469 [2024-07-22 11:18:53.666738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.469 [2024-07-22 11:18:53.672261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.469 [2024-07-22 11:18:53.672597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.469 [2024-07-22 11:18:53.672624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.678312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.678672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.678729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.684309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.684586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.684612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.689997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.690362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.690400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.695687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.695959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.696018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.701417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.701667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.701691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.707115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.707374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.707399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.712844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.713104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.713128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.718503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.718754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.718774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.724197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.724477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.724501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.729838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.730130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.730154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.735449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.735750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.735777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.741105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.741355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.741379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.746672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.746924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.746949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.752280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.752549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.752576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.757824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.758127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.758153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.728 [2024-07-22 11:18:53.763654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.728 [2024-07-22 11:18:53.763943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.728 [2024-07-22 11:18:53.764015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.769491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.769744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.769769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.775150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.775403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.775428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.780913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.781178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.781207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.786386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.786638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 
[2024-07-22 11:18:53.786663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.792043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.792296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.792320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.798026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.798350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.798398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.803989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.804312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.804342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.809690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.809943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.809992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.815251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.815505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.815529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.820887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.821150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.821175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.826542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.826792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.826816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.832259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.832586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.832611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.837910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.838223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.838285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.843587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.843884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.843910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.849249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.849502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.849535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.854856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.855121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.855146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.860479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.860732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.860757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.866090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.866364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.866419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.871528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.871859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.871886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.877396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.877648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.877667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.883051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.883320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.883354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.888859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.889135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.889155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.894421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.894701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.894725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.900088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.900330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.900363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.905592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.905858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.905883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.911217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.911471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.911495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.916814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.917092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.917112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.922515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.729 [2024-07-22 11:18:53.922766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.729 [2024-07-22 11:18:53.922786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:21:48.729 [2024-07-22 11:18:53.927955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.730 [2024-07-22 11:18:53.928318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.730 [2024-07-22 11:18:53.928349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:21:48.730 [2024-07-22 11:18:53.933760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.730 [2024-07-22 11:18:53.934079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.987 [2024-07-22 11:18:53.934106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:21:48.987 [2024-07-22 11:18:53.939560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c66270) with pdu=0x2000190fef90 01:21:48.987 [2024-07-22 11:18:53.939898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:21:48.987 [2024-07-22 11:18:53.939926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:21:48.987 01:21:48.987 Latency(us) 01:21:48.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:48.987 Job: nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 16, IO size: 131072) 01:21:48.987 nvme0n1 : 2.00 5573.33 696.67 0.00 0.00 2865.50 2070.34 6613.18 01:21:48.987 =================================================================================================================== 01:21:48.987 Total : 5573.33 696.67 0.00 0.00 2865.50 2070.34 6613.18 01:21:48.987 0 01:21:48.987 11:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:21:48.987 11:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:21:48.987 11:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:21:48.987 11:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:21:48.987 | .driver_specific 01:21:48.987 | .nvme_error 01:21:48.987 | .status_code 01:21:48.987 | .command_transient_transport_error' 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 359 > 0 )) 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112729 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112729 ']' 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112729 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112729 01:21:49.245 killing process with pid 112729 01:21:49.245 Received shutdown signal, test time was about 2.000000 seconds 01:21:49.245 01:21:49.245 Latency(us) 01:21:49.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:21:49.245 =================================================================================================================== 01:21:49.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112729' 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112729 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112729 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 112438 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112438 ']' 01:21:49.245 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112438 01:21:49.503 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 01:21:49.503 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:21:49.503 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112438 01:21:49.503 killing 
process with pid 112438 01:21:49.503 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:21:49.503 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:21:49.503 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112438' 01:21:49.503 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112438 01:21:49.503 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112438 01:21:49.761 ************************************ 01:21:49.761 END TEST nvmf_digest_error 01:21:49.761 ************************************ 01:21:49.761 01:21:49.761 real 0m17.354s 01:21:49.761 user 0m31.621s 01:21:49.761 sys 0m5.168s 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:21:49.761 rmmod nvme_tcp 01:21:49.761 rmmod nvme_fabrics 01:21:49.761 rmmod nvme_keyring 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 112438 ']' 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 112438 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 112438 ']' 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 112438 01:21:49.761 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (112438) - No such process 01:21:49.761 Process with pid 112438 is not found 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 112438 is not found' 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:21:49.761 01:21:49.761 real 0m34.905s 01:21:49.761 user 1m1.860s 01:21:49.761 sys 0m10.763s 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 01:21:49.761 11:18:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:21:49.761 ************************************ 01:21:49.761 END TEST nvmf_digest 01:21:49.761 ************************************ 01:21:50.020 11:18:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:21:50.020 11:18:54 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 01:21:50.020 11:18:54 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 01:21:50.020 11:18:54 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 01:21:50.020 11:18:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:21:50.020 11:18:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:21:50.020 11:18:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:21:50.020 ************************************ 01:21:50.020 START TEST nvmf_mdns_discovery 01:21:50.020 ************************************ 01:21:50.020 11:18:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 01:21:50.020 * Looking for test storage... 01:21:50.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:21:50.020 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:50.020 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 01:21:50.020 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:50.020 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:50.020 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:50.020 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:50.020 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:50.020 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:50.021 
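For reference, the (( 359 > 0 )) check that closed out the digest-error run above is produced by the test's get_transient_errcount helper. A minimal sketch of that step, reconstructed from the trace (the rpc.py path, the /var/tmp/bperf.sock socket and the nvme0n1 bdev name are the ones the log prints; it assumes the bdevperf instance behind that socket is still running):

# Sketch only -- reconstructed from the host/digest.sh trace above.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# The digest test passes when the injected CRC-32C data-digest errors surface
# as transient transport errors; the run above counted 359 of them.
(( $(get_transient_errcount nvme0n1) > 0 ))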
11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:50.021 11:18:55 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:21:50.021 Cannot find device "nvmf_tgt_br" 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:21:50.021 Cannot find device "nvmf_tgt_br2" 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:21:50.021 Cannot find device "nvmf_tgt_br" 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:21:50.021 Cannot find device "nvmf_tgt_br2" 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:21:50.021 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:50.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:50.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:21:50.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:21:50.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 01:21:50.279 01:21:50.279 --- 10.0.0.2 ping statistics --- 01:21:50.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:50.279 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:21:50.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:50.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 01:21:50.279 01:21:50.279 --- 10.0.0.3 ping statistics --- 01:21:50.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:50.279 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:50.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:21:50.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 01:21:50.279 01:21:50.279 --- 10.0.0.1 ping statistics --- 01:21:50.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:50.279 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=113021 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 113021 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 113021 ']' 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:50.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:50.279 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:50.280 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:50.280 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:50.280 11:18:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:50.538 [2024-07-22 11:18:55.509759] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:21:50.538 [2024-07-22 11:18:55.509841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:50.538 [2024-07-22 11:18:55.651698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:50.538 [2024-07-22 11:18:55.726814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
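The ping replies above are the last step of the veth topology that nvmf_veth_init builds before the target application is started. Condensed from the trace, the equivalent setup is roughly the following (namespace, interface and address names are taken verbatim from the log; run as root):

# Condensed sketch of the nvmf_veth_init sequence traced above.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one initiator-side, two target-side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# The target ends move into the namespace the nvmf_tgt app will run in.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 is the initiator, 10.0.0.2 and 10.0.0.3 are the target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so the initiator can reach both target interfaces.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP (port 4420) from the initiator and allow bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, mirroring the pings in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1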
01:21:50.538 [2024-07-22 11:18:55.726872] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:50.538 [2024-07-22 11:18:55.726886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:50.538 [2024-07-22 11:18:55.726896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:50.538 [2024-07-22 11:18:55.726905] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:21:50.538 [2024-07-22 11:18:55.726942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 [2024-07-22 11:18:56.647270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 [2024-07-22 11:18:56.655358] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 null0 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 null1 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 null2 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 null3 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=113071 01:21:51.495 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:21:51.753 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 113071 /tmp/host.sock 01:21:51.753 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 113071 ']' 01:21:51.753 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 01:21:51.753 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 01:21:51.753 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:21:51.753 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:21:51.753 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 01:21:51.753 11:18:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:51.753 [2024-07-22 11:18:56.751135] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:21:51.753 [2024-07-22 11:18:56.751229] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113071 ] 01:21:51.753 [2024-07-22 11:18:56.888126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:52.012 [2024-07-22 11:18:56.961013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:21:52.578 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:21:52.578 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 01:21:52.578 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 01:21:52.578 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 01:21:52.578 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 01:21:52.837 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=113100 01:21:52.837 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 01:21:52.837 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 01:21:52.837 11:18:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 01:21:52.837 Process 984 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 01:21:52.837 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 01:21:52.837 Successfully dropped root privileges. 01:21:52.837 avahi-daemon 0.8 starting up. 01:21:52.837 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 01:21:52.837 Successfully called chroot(). 01:21:52.837 Successfully dropped remaining capabilities. 01:21:53.773 No service file found in /etc/avahi/services. 01:21:53.773 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 01:21:53.773 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 01:21:53.773 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 01:21:53.773 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 01:21:53.773 Network interface enumeration completed. 01:21:53.773 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 01:21:53.773 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 01:21:53.773 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 01:21:53.773 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 01:21:53.773 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 4017389417. 
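The avahi-daemon output above is the mDNS responder that the discovery test will browse for _nvme-disc._tcp services. A sketch of the restart traced at mdns_discovery.sh lines 56-59; the config string and namespace are taken verbatim from the log, while backgrounding with & and capturing $! are assumptions about how the avahipid shown in the trace (113100) is obtained:

# Sketch only. /dev/fd/63 in the trace is bash process substitution of this config.
avahi-daemon --kill          # stop the system-wide instance first
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
    '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
avahipid=$!
sleep 1                      # let it join the mDNS groups on both target interfaces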
01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:53.773 11:18:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
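Stripped of the xtrace noise, the host-side commands traced above come down to a few rpc.py calls against the discovery host started with -r /tmp/host.sock. A sketch, assuming rpc_cmd wraps scripts/rpc.py the same way the explicit calls in the digest section do:

# Sketch only -- equivalent of the rpc_cmd calls in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc -s /tmp/host.sock log_set_flag bdev_nvme
$rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# get_subsystem_names / get_bdev_list helpers: both are expected to print
# nothing ('' == '') until the discovered subsystems are attached.
$rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
$rpc -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name' | sort | xargs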
01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 01:21:54.032 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.033 [2024-07-22 11:18:59.185186] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.033 11:18:59 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.033 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.290 [2024-07-22 11:18:59.240024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.290 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.291 [2024-07-22 11:18:59.279944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.291 11:18:59 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.291 [2024-07-22 11:18:59.287945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:54.291 11:18:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 01:21:55.224 [2024-07-22 11:19:00.085190] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 01:21:55.482 [2024-07-22 11:19:00.685210] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:21:55.482 [2024-07-22 11:19:00.685314] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 01:21:55.482 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:21:55.482 cookie is 0 01:21:55.482 is_local: 1 01:21:55.482 our_own: 0 01:21:55.482 wide_area: 0 01:21:55.482 multicast: 1 01:21:55.482 cached: 1 01:21:55.740 [2024-07-22 11:19:00.785193] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:21:55.740 [2024-07-22 11:19:00.785220] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 01:21:55.740 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:21:55.740 cookie is 0 01:21:55.740 is_local: 1 01:21:55.740 our_own: 0 01:21:55.740 wide_area: 0 01:21:55.740 multicast: 1 01:21:55.740 cached: 1 01:21:55.740 [2024-07-22 11:19:00.785247] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 01:21:55.740 [2024-07-22 11:19:00.885191] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:21:55.740 [2024-07-22 11:19:00.885214] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 01:21:55.740 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:21:55.740 cookie is 0 01:21:55.740 is_local: 1 01:21:55.740 our_own: 0 01:21:55.740 wide_area: 0 01:21:55.740 multicast: 1 01:21:55.740 cached: 1 01:21:55.998 [2024-07-22 11:19:00.985192] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:21:55.998 [2024-07-22 11:19:00.985216] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 01:21:55.998 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:21:55.998 cookie is 0 01:21:55.998 is_local: 1 01:21:55.998 our_own: 0 01:21:55.998 wide_area: 0 01:21:55.998 multicast: 1 01:21:55.998 cached: 1 01:21:55.998 [2024-07-22 11:19:00.985225] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 01:21:56.566 [2024-07-22 11:19:01.692100] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:21:56.566 [2024-07-22 11:19:01.692148] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:21:56.566 [2024-07-22 11:19:01.692176] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:21:56.825 [2024-07-22 11:19:01.778367] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 01:21:56.825 [2024-07-22 11:19:01.836221] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 01:21:56.825 [2024-07-22 11:19:01.836276] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 01:21:56.825 [2024-07-22 11:19:01.891539] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:21:56.825 [2024-07-22 11:19:01.891562] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:21:56.825 [2024-07-22 11:19:01.891580] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:21:56.825 [2024-07-22 11:19:01.977654] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 01:21:57.084 [2024-07-22 11:19:02.033693] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 01:21:57.084 [2024-07-22 11:19:02.033733] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:21:59.613 11:19:04 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:21:59.613 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 01:21:59.614 
11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:21:59.614 11:19:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 01:22:00.544 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 01:22:00.544 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:22:00.544 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:22:00.544 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:00.544 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:00.544 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:22:00.544 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:00.801 [2024-07-22 11:19:05.814564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:22:00.801 [2024-07-22 11:19:05.815296] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:22:00.801 [2024-07-22 11:19:05.815366] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:22:00.801 [2024-07-22 11:19:05.815403] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:22:00.801 [2024-07-22 11:19:05.815417] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:00.801 [2024-07-22 11:19:05.822470] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:22:00.801 [2024-07-22 11:19:05.823268] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:22:00.801 [2024-07-22 11:19:05.823328] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:00.801 11:19:05 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 01:22:00.801 [2024-07-22 11:19:05.954443] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 01:22:00.801 [2024-07-22 11:19:05.954651] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 01:22:01.058 [2024-07-22 11:19:06.015767] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 01:22:01.058 [2024-07-22 11:19:06.015794] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
01:22:01.058 [2024-07-22 11:19:06.015817] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:22:01.058 [2024-07-22 11:19:06.015836] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:22:01.058 [2024-07-22 11:19:06.015877] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 01:22:01.058 [2024-07-22 11:19:06.015886] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:22:01.058 [2024-07-22 11:19:06.015892] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:22:01.059 [2024-07-22 11:19:06.015905] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:22:01.059 [2024-07-22 11:19:06.061527] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 01:22:01.059 [2024-07-22 11:19:06.061549] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:22:01.059 [2024-07-22 11:19:06.061604] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 01:22:01.059 [2024-07-22 11:19:06.061612] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:01.992 11:19:06 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:22:01.992 11:19:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:01.992 [2024-07-22 11:19:07.135165] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:22:01.992 [2024-07-22 11:19:07.135218] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:22:01.992 [2024-07-22 11:19:07.135255] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:22:01.992 [2024-07-22 11:19:07.135268] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:01.992 [2024-07-22 11:19:07.142553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:22:01.992 [2024-07-22 11:19:07.142606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:01.992 [2024-07-22 11:19:07.142619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:22:01.992 [2024-07-22 11:19:07.142628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:01.992 [2024-07-22 11:19:07.142637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:22:01.992 [2024-07-22 11:19:07.142646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:01.992 [2024-07-22 11:19:07.142655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:22:01.992 [2024-07-22 11:19:07.142664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:01.992 [2024-07-22 11:19:07.142672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to 
be set 01:22:01.992 [2024-07-22 11:19:07.143185] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:22:01.992 [2024-07-22 11:19:07.143242] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 01:22:01.992 [2024-07-22 11:19:07.145803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:22:01.992 [2024-07-22 11:19:07.145848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:01.992 [2024-07-22 11:19:07.145859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:22:01.992 [2024-07-22 11:19:07.145867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:01.992 [2024-07-22 11:19:07.145876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:22:01.992 [2024-07-22 11:19:07.145884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:01.992 [2024-07-22 11:19:07.145893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:22:01.992 [2024-07-22 11:19:07.145900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:22:01.992 [2024-07-22 11:19:07.145908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:01.992 11:19:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 01:22:01.992 [2024-07-22 11:19:07.152506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:01.992 [2024-07-22 11:19:07.155775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:01.992 [2024-07-22 11:19:07.162526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:01.993 [2024-07-22 11:19:07.162670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:01.993 [2024-07-22 11:19:07.162690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:01.993 [2024-07-22 11:19:07.162700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:01.993 [2024-07-22 11:19:07.162715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:01.993 [2024-07-22 11:19:07.162728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:01.993 [2024-07-22 11:19:07.162736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:01.993 [2024-07-22 11:19:07.162746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
01:22:01.993 [2024-07-22 11:19:07.162760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:01.993 [2024-07-22 11:19:07.165784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:01.993 [2024-07-22 11:19:07.165877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:01.993 [2024-07-22 11:19:07.165895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:01.993 [2024-07-22 11:19:07.165904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:01.993 [2024-07-22 11:19:07.165918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:01.993 [2024-07-22 11:19:07.165930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:01.993 [2024-07-22 11:19:07.165938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:01.993 [2024-07-22 11:19:07.165946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:01.993 [2024-07-22 11:19:07.165958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:01.993 [2024-07-22 11:19:07.172594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:01.993 [2024-07-22 11:19:07.172680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:01.993 [2024-07-22 11:19:07.172698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:01.993 [2024-07-22 11:19:07.172707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:01.993 [2024-07-22 11:19:07.172720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:01.993 [2024-07-22 11:19:07.172732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:01.993 [2024-07-22 11:19:07.172740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:01.993 [2024-07-22 11:19:07.172747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:01.993 [2024-07-22 11:19:07.172760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:01.993 [2024-07-22 11:19:07.175829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:01.993 [2024-07-22 11:19:07.175935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:01.993 [2024-07-22 11:19:07.175955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:01.993 [2024-07-22 11:19:07.175966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:01.993 [2024-07-22 11:19:07.175994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:01.993 [2024-07-22 11:19:07.176028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:01.993 [2024-07-22 11:19:07.176038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:01.993 [2024-07-22 11:19:07.176047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:01.993 [2024-07-22 11:19:07.176062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:01.993 [2024-07-22 11:19:07.182638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:01.993 [2024-07-22 11:19:07.182738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:01.993 [2024-07-22 11:19:07.182756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:01.993 [2024-07-22 11:19:07.182765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:01.993 [2024-07-22 11:19:07.182778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:01.993 [2024-07-22 11:19:07.182791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:01.993 [2024-07-22 11:19:07.182799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:01.993 [2024-07-22 11:19:07.182806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:01.993 [2024-07-22 11:19:07.182819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:01.993 [2024-07-22 11:19:07.185901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:01.993 [2024-07-22 11:19:07.186026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:01.993 [2024-07-22 11:19:07.186044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:01.993 [2024-07-22 11:19:07.186054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:01.993 [2024-07-22 11:19:07.186068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:01.993 [2024-07-22 11:19:07.186108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:01.993 [2024-07-22 11:19:07.186118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:01.993 [2024-07-22 11:19:07.186126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:01.993 [2024-07-22 11:19:07.186139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:01.993 [2024-07-22 11:19:07.192715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:01.993 [2024-07-22 11:19:07.192806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:01.993 [2024-07-22 11:19:07.192825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:01.993 [2024-07-22 11:19:07.192835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:01.993 [2024-07-22 11:19:07.192849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:01.993 [2024-07-22 11:19:07.192861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:01.993 [2024-07-22 11:19:07.192869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:01.993 [2024-07-22 11:19:07.192876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:01.993 [2024-07-22 11:19:07.192889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:01.993 [2024-07-22 11:19:07.195992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:01.993 [2024-07-22 11:19:07.196174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:01.993 [2024-07-22 11:19:07.196195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:01.993 [2024-07-22 11:19:07.196205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:01.993 [2024-07-22 11:19:07.196238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:01.993 [2024-07-22 11:19:07.196268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:01.993 [2024-07-22 11:19:07.196277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:01.993 [2024-07-22 11:19:07.196285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:01.993 [2024-07-22 11:19:07.196314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:02.252 [2024-07-22 11:19:07.202780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:02.252 [2024-07-22 11:19:07.202890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.252 [2024-07-22 11:19:07.202908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:02.252 [2024-07-22 11:19:07.202918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:02.252 [2024-07-22 11:19:07.202932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:02.252 [2024-07-22 11:19:07.202945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:02.252 [2024-07-22 11:19:07.202953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:02.252 [2024-07-22 11:19:07.202974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:02.252 [2024-07-22 11:19:07.202988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:02.252 [2024-07-22 11:19:07.206103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:02.252 [2024-07-22 11:19:07.206188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.252 [2024-07-22 11:19:07.206206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:02.252 [2024-07-22 11:19:07.206215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:02.252 [2024-07-22 11:19:07.206228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:02.252 [2024-07-22 11:19:07.206240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:02.252 [2024-07-22 11:19:07.206248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:02.252 [2024-07-22 11:19:07.206255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:02.252 [2024-07-22 11:19:07.206268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:02.252 [2024-07-22 11:19:07.212859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:02.252 [2024-07-22 11:19:07.212942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.252 [2024-07-22 11:19:07.212960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:02.252 [2024-07-22 11:19:07.212969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:02.252 [2024-07-22 11:19:07.212995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:02.252 [2024-07-22 11:19:07.213008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:02.252 [2024-07-22 11:19:07.213015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:02.252 [2024-07-22 11:19:07.213023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:02.252 [2024-07-22 11:19:07.213035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:02.252 [2024-07-22 11:19:07.216148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:02.252 [2024-07-22 11:19:07.216234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.252 [2024-07-22 11:19:07.216251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:02.252 [2024-07-22 11:19:07.216260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:02.252 [2024-07-22 11:19:07.216274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:02.252 [2024-07-22 11:19:07.216287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:02.252 [2024-07-22 11:19:07.216294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:02.252 [2024-07-22 11:19:07.216302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:02.252 [2024-07-22 11:19:07.216329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:02.252 [2024-07-22 11:19:07.222902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:02.252 [2024-07-22 11:19:07.222994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.252 [2024-07-22 11:19:07.223012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:02.253 [2024-07-22 11:19:07.223021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.223035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.223047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.223054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.223062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:02.253 [2024-07-22 11:19:07.223074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:02.253 [2024-07-22 11:19:07.226191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:02.253 [2024-07-22 11:19:07.226274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.226290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:02.253 [2024-07-22 11:19:07.226300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.226313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.226339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.226348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.226355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:02.253 [2024-07-22 11:19:07.226367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:02.253 [2024-07-22 11:19:07.232947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:02.253 [2024-07-22 11:19:07.233047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.233066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:02.253 [2024-07-22 11:19:07.233075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.233089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.233101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.233109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.233117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:02.253 [2024-07-22 11:19:07.233129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:02.253 [2024-07-22 11:19:07.236234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:02.253 [2024-07-22 11:19:07.236339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.236358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:02.253 [2024-07-22 11:19:07.236367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.236381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.236424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.236433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.236440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:02.253 [2024-07-22 11:19:07.236453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:02.253 [2024-07-22 11:19:07.243002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:02.253 [2024-07-22 11:19:07.243103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.243120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:02.253 [2024-07-22 11:19:07.243129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.243143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.243155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.243163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.243171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:02.253 [2024-07-22 11:19:07.243183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:02.253 [2024-07-22 11:19:07.246311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:02.253 [2024-07-22 11:19:07.246394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.246411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:02.253 [2024-07-22 11:19:07.246419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.246432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.246459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.246468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.246475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:02.253 [2024-07-22 11:19:07.246488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:02.253 [2024-07-22 11:19:07.253076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:02.253 [2024-07-22 11:19:07.253158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.253175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:02.253 [2024-07-22 11:19:07.253184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.253197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.253209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.253216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.253227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:02.253 [2024-07-22 11:19:07.253240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:02.253 [2024-07-22 11:19:07.256353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:02.253 [2024-07-22 11:19:07.256453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.256472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:02.253 [2024-07-22 11:19:07.256480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.256494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.256521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.256530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.256537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:02.253 [2024-07-22 11:19:07.256549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:02.253 [2024-07-22 11:19:07.263118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:02.253 [2024-07-22 11:19:07.263219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.263237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:02.253 [2024-07-22 11:19:07.263246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.263260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.263272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.263279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.263287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:02.253 [2024-07-22 11:19:07.263300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:22:02.253 [2024-07-22 11:19:07.266407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 01:22:02.253 [2024-07-22 11:19:07.266490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.266508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb970 with addr=10.0.0.3, port=4420 01:22:02.253 [2024-07-22 11:19:07.266516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb970 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.266529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb970 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.266556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.266565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.266573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 01:22:02.253 [2024-07-22 11:19:07.266585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:22:02.253 [2024-07-22 11:19:07.273193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 01:22:02.253 [2024-07-22 11:19:07.273276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:22:02.253 [2024-07-22 11:19:07.273293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190fe40 with addr=10.0.0.2, port=4420 01:22:02.253 [2024-07-22 11:19:07.273302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fe40 is same with the state(5) to be set 01:22:02.253 [2024-07-22 11:19:07.273315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190fe40 (9): Bad file descriptor 01:22:02.253 [2024-07-22 11:19:07.273327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:22:02.253 [2024-07-22 11:19:07.273334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 01:22:02.253 [2024-07-22 11:19:07.273342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:22:02.253 [2024-07-22 11:19:07.273354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
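The repeated connect() failures above (errno = 111, ECONNREFUSED) are expected at this point in the test: the target has dropped the subsystem listener on port 4420, so every reconnect attempt from the host is refused until discovery re-attaches through the listener that is still advertised on 4421. A minimal sketch of how that situation can be reproduced against a running SPDK target is shown below; the NQN, address and ports are taken from the log, while the ./scripts/rpc.py path, the use of nc for the probe, and the explicit remove_listener step are illustrative assumptions, not what this script literally runs.
  # drop the 4420 listener so the host's reconnect attempts are refused (ECONNREFUSED / errno 111)
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # nothing is listening on 4420 any more, so a plain TCP probe fails the same way the host's connect() does
  nc -z 10.0.0.2 4420 || echo "connect refused, as in the log above"
  # the subsystem remains reachable through whatever listeners are still registered (4421 here)
  ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode0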
01:22:02.253 [2024-07-22 11:19:07.274557] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 01:22:02.254 [2024-07-22 11:19:07.274598] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:22:02.254 [2024-07-22 11:19:07.274617] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:22:02.254 [2024-07-22 11:19:07.274648] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 01:22:02.254 [2024-07-22 11:19:07.274661] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:22:02.254 [2024-07-22 11:19:07.274673] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:22:02.254 [2024-07-22 11:19:07.360642] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:22:02.254 [2024-07-22 11:19:07.360698] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:03.190 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:03.448 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 01:22:03.448 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 01:22:03.448 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 01:22:03.448 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 01:22:03.448 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:03.448 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:03.448 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:03.448 11:19:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 01:22:03.448 [2024-07-22 11:19:08.485217] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:04.388 11:19:09 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:04.388 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:04.661 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:04.661 [2024-07-22 11:19:09.654292] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 01:22:04.661 
2024/07/22 11:19:09 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 01:22:04.661 request: 01:22:04.662 { 01:22:04.662 "method": "bdev_nvme_start_mdns_discovery", 01:22:04.662 "params": { 01:22:04.662 "name": "mdns", 01:22:04.662 "svcname": "_nvme-disc._http", 01:22:04.662 "hostnqn": "nqn.2021-12.io.spdk:test" 01:22:04.662 } 01:22:04.662 } 01:22:04.662 Got JSON-RPC error response 01:22:04.662 GoRPCClient: error on JSON-RPC call 01:22:04.662 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:22:04.662 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 01:22:04.662 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:22:04.662 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:22:04.662 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:22:04.662 11:19:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 01:22:05.227 [2024-07-22 11:19:10.242944] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 01:22:05.227 [2024-07-22 11:19:10.342943] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 01:22:05.485 [2024-07-22 11:19:10.442946] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:22:05.485 [2024-07-22 11:19:10.442987] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 01:22:05.485 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:22:05.485 cookie is 0 01:22:05.485 is_local: 1 01:22:05.485 our_own: 0 01:22:05.485 wide_area: 0 01:22:05.485 multicast: 1 01:22:05.485 cached: 1 01:22:05.485 [2024-07-22 11:19:10.542947] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:22:05.485 [2024-07-22 11:19:10.542975] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 01:22:05.485 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:22:05.485 cookie is 0 01:22:05.485 is_local: 1 01:22:05.485 our_own: 0 01:22:05.485 wide_area: 0 01:22:05.485 multicast: 1 01:22:05.485 cached: 1 01:22:05.485 [2024-07-22 11:19:10.543002] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 01:22:05.485 [2024-07-22 11:19:10.642948] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:22:05.485 [2024-07-22 11:19:10.642975] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 01:22:05.485 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:22:05.485 cookie is 0 01:22:05.485 is_local: 1 01:22:05.485 our_own: 0 01:22:05.485 wide_area: 0 01:22:05.485 multicast: 1 01:22:05.485 cached: 1 01:22:05.742 [2024-07-22 11:19:10.742947] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:22:05.742 [2024-07-22 11:19:10.742975] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 01:22:05.742 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:22:05.742 cookie is 0 01:22:05.742 is_local: 1 01:22:05.742 our_own: 0 01:22:05.742 wide_area: 0 01:22:05.742 multicast: 1 01:22:05.742 cached: 1 01:22:05.742 [2024-07-22 11:19:10.743001] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 01:22:06.305 [2024-07-22 11:19:11.453500] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:22:06.305 [2024-07-22 11:19:11.453522] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:22:06.305 [2024-07-22 11:19:11.453539] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:22:06.562 [2024-07-22 11:19:11.539613] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 01:22:06.562 [2024-07-22 11:19:11.599748] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 01:22:06.562 [2024-07-22 11:19:11.599776] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 01:22:06.562 [2024-07-22 11:19:11.653070] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 01:22:06.562 [2024-07-22 11:19:11.653092] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 01:22:06.562 [2024-07-22 11:19:11.653124] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 01:22:06.562 [2024-07-22 11:19:11.739203] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 01:22:06.819 [2024-07-22 11:19:11.798814] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 01:22:06.819 [2024-07-22 11:19:11.798840] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:10.100 
11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:10.100 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.101 [2024-07-22 11:19:14.857235] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 01:22:10.101 2024/07/22 11:19:14 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 01:22:10.101 request: 01:22:10.101 { 01:22:10.101 "method": "bdev_nvme_start_mdns_discovery", 01:22:10.101 "params": { 01:22:10.101 "name": "cdc", 01:22:10.101 "svcname": "_nvme-disc._tcp", 01:22:10.101 "hostnqn": "nqn.2021-12.io.spdk:test" 01:22:10.101 } 01:22:10.101 } 01:22:10.101 Got JSON-RPC error response 01:22:10.101 GoRPCClient: error on JSON-RPC call 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 01:22:10.101 11:19:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 113071 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 113071 01:22:10.101 [2024-07-22 11:19:15.096586] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 113100 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 01:22:10.101 Got SIGTERM, quitting. 01:22:10.101 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 01:22:10.101 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 01:22:10.101 avahi-daemon 0.8 exiting. 
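At this point the mdns_discovery test has exercised both duplicate-start error paths and torn everything down: bdev_nvme_stop_mdns_discovery -b mdns stops the avahi poller, nvmf_stop_mdns_prr removes the pull registration, and the avahi daemon leaves its multicast groups and exits on SIGTERM. A rough outline of the RPC lifecycle exercised above, assuming the stock rpc.py helper in an SPDK checkout and the same /tmp/host.sock application socket and service names used in this run (the repo-relative path is an assumption):
  # start one mDNS-based discovery service on the host-side bdev layer
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # a second start with the same name is rejected with -17 (File exists), which is what the NOT wrapper above expects
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test \
    || echo "duplicate mdns discovery start rejected, as logged above"
  # inspect what the poller has discovered, then stop it
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns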
01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:22:10.101 rmmod nvme_tcp 01:22:10.101 rmmod nvme_fabrics 01:22:10.101 rmmod nvme_keyring 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 113021 ']' 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 113021 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 113021 ']' 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 113021 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:22:10.101 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113021 01:22:10.359 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:22:10.359 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:22:10.359 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113021' 01:22:10.359 killing process with pid 113021 01:22:10.359 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 113021 01:22:10.359 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 113021 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:22:10.618 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:22:10.618 01:22:10.618 real 0m20.622s 01:22:10.618 user 0m40.358s 01:22:10.619 sys 0m1.977s 01:22:10.619 ************************************ 01:22:10.619 END TEST nvmf_mdns_discovery 01:22:10.619 ************************************ 01:22:10.619 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 01:22:10.619 11:19:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:22:10.619 11:19:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 
0 01:22:10.619 11:19:15 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 01:22:10.619 11:19:15 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:22:10.619 11:19:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:22:10.619 11:19:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:22:10.619 11:19:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:22:10.619 ************************************ 01:22:10.619 START TEST nvmf_host_multipath 01:22:10.619 ************************************ 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:22:10.619 * Looking for test storage... 01:22:10.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:22:10.619 Cannot 
find device "nvmf_tgt_br" 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:22:10.619 Cannot find device "nvmf_tgt_br2" 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 01:22:10.619 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:22:10.878 Cannot find device "nvmf_tgt_br" 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:22:10.878 Cannot find device "nvmf_tgt_br2" 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:22:10.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:22:10.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:22:10.878 11:19:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:22:10.878 11:19:16 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:22:10.878 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:22:10.878 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:22:10.878 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:22:10.878 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:22:10.878 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:22:10.878 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:22:10.878 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:22:10.878 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:22:11.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:22:11.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 01:22:11.137 01:22:11.137 --- 10.0.0.2 ping statistics --- 01:22:11.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:11.137 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:22:11.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:22:11.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 01:22:11.137 01:22:11.137 --- 10.0.0.3 ping statistics --- 01:22:11.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:11.137 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:22:11.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:22:11.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:22:11.137 01:22:11.137 --- 10.0.0.1 ping statistics --- 01:22:11.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:11.137 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=113662 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 113662 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 113662 ']' 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:11.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:11.137 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 01:22:11.138 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:22:11.138 [2024-07-22 11:19:16.191621] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:22:11.138 [2024-07-22 11:19:16.191717] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:22:11.138 [2024-07-22 11:19:16.332434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:22:11.397 [2024-07-22 11:19:16.393433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:22:11.397 [2024-07-22 11:19:16.393480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:22:11.397 [2024-07-22 11:19:16.393490] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:22:11.397 [2024-07-22 11:19:16.393497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:22:11.397 [2024-07-22 11:19:16.393504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:22:11.397 [2024-07-22 11:19:16.393636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:22:11.397 [2024-07-22 11:19:16.393643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:22:11.397 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:22:11.397 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 01:22:11.397 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:22:11.397 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 01:22:11.397 11:19:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:22:11.397 11:19:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:22:11.397 11:19:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=113662 01:22:11.397 11:19:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:22:11.656 [2024-07-22 11:19:16.822419] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:22:11.656 11:19:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:22:11.915 Malloc0 01:22:11.915 11:19:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:22:12.173 11:19:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:22:12.432 11:19:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:22:12.691 [2024-07-22 11:19:17.807442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:22:12.691 11:19:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:22:12.949 [2024-07-22 11:19:18.003715] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=113745 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 113745 /var/tmp/bdevperf.sock 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 113745 ']' 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:22:12.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 01:22:12.949 11:19:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:22:13.883 11:19:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:22:13.883 11:19:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 01:22:13.883 11:19:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:22:14.142 11:19:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 01:22:14.400 Nvme0n1 01:22:14.400 11:19:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:22:15.006 Nvme0n1 01:22:15.006 11:19:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 01:22:15.006 11:19:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:22:15.939 11:19:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 01:22:15.939 11:19:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:22:16.197 11:19:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:22:16.455 11:19:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 01:22:16.455 11:19:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113834 01:22:16.455 11:19:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113662 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:22:16.455 11:19:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:22:23.014 11:19:27 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:23.014 Attaching 4 probes... 01:22:23.014 @path[10.0.0.2, 4421]: 16209 01:22:23.014 @path[10.0.0.2, 4421]: 16613 01:22:23.014 @path[10.0.0.2, 4421]: 15794 01:22:23.014 @path[10.0.0.2, 4421]: 16482 01:22:23.014 @path[10.0.0.2, 4421]: 18038 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113834 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:22:23.014 11:19:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:22:23.014 11:19:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 01:22:23.014 11:19:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113662 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:22:23.014 11:19:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113959 01:22:23.014 11:19:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:29.579 Attaching 4 probes... 
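
For reference, the confirm_io_on_port check being exercised here boils down to comparing two values: the port the target currently advertises with the requested ANA state, and the port the initiator actually sent I/O to according to the bpftrace @path counters. A minimal sketch of that comparison, reusing the NQN, jq filter and trace file from this run (the variable names are illustrative, not the exact helper code):

    # Port the target reports in the expected ANA state ("optimized" in this pass)
    active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    # Port bdevperf actually used, taken from the @path[10.0.0.2, <port>] counters in trace.txt
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt \
        | cut -d ']' -f1 | sed -n 1p)
    # The check passes only if both agree
    [[ "$port" == "$active_port" ]]
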
01:22:29.579 @path[10.0.0.2, 4420]: 19607 01:22:29.579 @path[10.0.0.2, 4420]: 19901 01:22:29.579 @path[10.0.0.2, 4420]: 19622 01:22:29.579 @path[10.0.0.2, 4420]: 19664 01:22:29.579 @path[10.0.0.2, 4420]: 19854 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113959 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:22:29.579 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:22:29.836 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 01:22:29.836 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113662 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:22:29.836 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114090 01:22:29.836 11:19:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:22:36.428 11:19:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:22:36.428 11:19:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:36.428 Attaching 4 probes... 
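
The set_ANA_state steps interleaved above are simply one nvmf_subsystem_listener_set_ana_state RPC per listener; each pass of the test only changes the -n argument. For the step just shown (4420 inaccessible, 4421 optimized) the pair of calls is:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
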
01:22:36.428 @path[10.0.0.2, 4421]: 11388 01:22:36.428 @path[10.0.0.2, 4421]: 15832 01:22:36.428 @path[10.0.0.2, 4421]: 16205 01:22:36.428 @path[10.0.0.2, 4421]: 16002 01:22:36.428 @path[10.0.0.2, 4421]: 15315 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114090 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 01:22:36.428 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 01:22:36.686 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 01:22:36.686 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114222 01:22:36.686 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113662 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:22:36.686 11:19:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:43.241 Attaching 4 probes... 
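
The two ports whose ANA states keep being flipped were created on the target earlier in this run; condensed, the target-side setup was the following sequence of RPCs (commands as issued above, with -r on nvmf_create_subsystem being what enables per-listener ANA reporting in the first place):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
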
01:22:43.241 01:22:43.241 01:22:43.241 01:22:43.241 01:22:43.241 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114222 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 01:22:43.241 11:19:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 01:22:43.241 11:19:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:22:43.241 11:19:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 01:22:43.241 11:19:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114351 01:22:43.241 11:19:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113662 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:22:43.241 11:19:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:49.804 Attaching 4 probes... 
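
Each confirm pass also shows where the @path counters come from: scripts/bpftrace.sh attaches scripts/bpf/nvmf_path.bt to the running nvmf target (pid 113662 here) for roughly six seconds, and the counters are read back from trace.txt. In the all-inaccessible pass just above, no I/O flows on either port, so the trace contains no @path lines and both sides of the comparison end up as empty strings, which is exactly what that pass expects. A rough sketch of the collection step (the redirection into trace.txt and the backgrounding are assumptions about what the helper wraps; the script paths and pid are the ones from this run):

    # Collect ~6 seconds of per-path I/O counters from the running target
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113662 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt \
        > /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt &
    dtrace_pid=$!
    sleep 6
    # ... parse trace.txt as sketched earlier, then stop the probes and clean up ...
    kill $dtrace_pid
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
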
01:22:49.804 @path[10.0.0.2, 4421]: 19392 01:22:49.804 @path[10.0.0.2, 4421]: 19893 01:22:49.804 @path[10.0.0.2, 4421]: 19872 01:22:49.804 @path[10.0.0.2, 4421]: 19900 01:22:49.804 @path[10.0.0.2, 4421]: 19799 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114351 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:49.804 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:22:49.804 [2024-07-22 11:19:54.932212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 01:22:49.804 [2024-07-22 11:19:54.932425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set 
01:22:49.804 [2024-07-22 11:19:54.932432 .. 2024-07-22 11:19:54.933051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2140 is same with the state(5) to be set (the same *ERROR* line repeats for every timestamp in this range while the 4421 listener is torn down; duplicate entries condensed)
01:22:49.805 11:19:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 01:22:51.182 11:19:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 01:22:51.182 11:19:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114487 01:22:51.182 11:19:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113662 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:22:51.182 11:19:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:22:57.797 11:20:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:22:57.797 11:20:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:57.797 Attaching 4 probes... 
01:22:57.797 @path[10.0.0.2, 4420]: 19101 01:22:57.797 @path[10.0.0.2, 4420]: 19888 01:22:57.797 @path[10.0.0.2, 4420]: 20001 01:22:57.797 @path[10.0.0.2, 4420]: 19906 01:22:57.797 @path[10.0.0.2, 4420]: 19769 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114487 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 01:22:57.797 [2024-07-22 11:20:02.453535] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 01:22:57.797 11:20:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 01:23:04.353 11:20:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 01:23:04.353 11:20:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114674 01:23:04.353 11:20:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113662 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:23:04.353 11:20:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:23:09.626 11:20:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:23:09.626 11:20:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:23:09.884 11:20:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:23:09.884 11:20:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:23:09.884 Attaching 4 probes... 
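
This stretch is the failover leg of the test: the optimized 4421 listener was removed outright while bdevperf was still running (the burst of tcp.c recv-state *ERROR* lines above is the target tearing down that connection), I/O was confirmed to have moved to the surviving non_optimized 4420 path, and 4421 was then re-added and promoted back to optimized, so the confirm pass whose probe output follows checks that I/O returns to 4421. Reduced to the RPCs involved (ordering simplified, confirmation steps elided):

    # Drop the active path while I/O is in flight ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 1    # give the initiator a moment to notice and fail over
    # ... confirm I/O now flows on 10.0.0.2:4420, then restore the path ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
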
01:23:09.884 @path[10.0.0.2, 4421]: 19014 01:23:09.884 @path[10.0.0.2, 4421]: 19500 01:23:09.884 @path[10.0.0.2, 4421]: 19718 01:23:09.884 @path[10.0.0.2, 4421]: 19331 01:23:09.884 @path[10.0.0.2, 4421]: 19547 01:23:09.884 11:20:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:23:09.884 11:20:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 01:23:09.884 11:20:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114674 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 113745 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 113745 ']' 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 113745 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113745 01:23:09.884 killing process with pid 113745 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113745' 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 113745 01:23:09.884 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 113745 01:23:10.150 Connection closed with partial response: 01:23:10.150 01:23:10.150 01:23:10.150 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 113745 01:23:10.150 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:23:10.150 [2024-07-22 11:19:18.084457] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:23:10.150 [2024-07-22 11:19:18.084558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113745 ] 01:23:10.150 [2024-07-22 11:19:18.226496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:10.150 [2024-07-22 11:19:18.309563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:23:10.150 Running I/O for 90 seconds... 
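
The log from this point on is the bdevperf side of the test, dumped from try.txt after the process was killed. That initiator was started idle on its own RPC socket and wired up to both target ports earlier in this run; condensed, the initiator-side commands were roughly the following, with the -x multipath flag on the second attach being what makes 10.0.0.2:4421 a second path of the same Nvme0n1 bdev rather than a separate controller (backgrounding and waiting are simplified here; the harness uses its own wrappers):

    # bdevperf launched idle (-z), controlled over /var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    # bdev_nvme options and the two paths, as issued above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # kick off the verify workload whose I/O trace follows
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &
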
01:23:10.150 [2024-07-22 11:19:28.124940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.150 [2024-07-22 11:19:28.125028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:23:10.150 [2024-07-22 11:19:28.125083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.150 [2024-07-22 11:19:28.125104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:23:10.150 [2024-07-22 11:19:28.125127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.150 [2024-07-22 11:19:28.125143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.125808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.125820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.151 [2024-07-22 11:19:28.127528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.127968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.127997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:23:10.151 [2024-07-22 11:19:28.128036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:23:10.151 [2024-07-22 11:19:28.128878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.151 [2024-07-22 11:19:28.128891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.128909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.128923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.128941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.128955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
01:23:10.152 [2024-07-22 11:19:28.129593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.129957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.129987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.152 [2024-07-22 11:19:28.130306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.152 [2024-07-22 11:19:28.130354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.152 [2024-07-22 11:19:28.130386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.152 [2024-07-22 11:19:28.130417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.152 [2024-07-22 11:19:28.130449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:23:10.152 [2024-07-22 11:19:28.130474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.153 [2024-07-22 11:19:28.130488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.130507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.153 [2024-07-22 11:19:28.130520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.153 [2024-07-22 11:19:28.131603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.131642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:23:10.153 [2024-07-22 11:19:28.131740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.131776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.131812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.131848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.131883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.131919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.131954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.131975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.153 [2024-07-22 11:19:28.131989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.132975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.132993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.133006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.133037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.133053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.133073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.133086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.133105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.153 [2024-07-22 11:19:28.133118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:23:10.153 [2024-07-22 11:19:28.133136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.154 [2024-07-22 11:19:28.133149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:28.133168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.154 [2024-07-22 11:19:28.133180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.624733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.624790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.624863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.624883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.624904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.624917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.624935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.624948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.625450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.154 [2024-07-22 11:19:34.625481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.154 [2024-07-22 11:19:34.625532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.154 [2024-07-22 11:19:34.625564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:23:10.154 [2024-07-22 11:19:34.625595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.625614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.154 [2024-07-22 11:19:34.625628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.154 [2024-07-22 11:19:34.626251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:23:10.154 [2024-07-22 11:19:34.626801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.154 [2024-07-22 11:19:34.626814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.626834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.626853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.626873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.626885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.626911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.626923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.626943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.626963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
01:23:10.155 [2024-07-22 11:19:34.627285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.627950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.627985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.628001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.628038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.628082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.628117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.628152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.155 [2024-07-22 11:19:34.628187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.155 [2024-07-22 11:19:34.628223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.155 [2024-07-22 11:19:34.628258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.155 [2024-07-22 11:19:34.628292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:23:10.155 [2024-07-22 11:19:34.628314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:23:10.156 [2024-07-22 11:19:34.628505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.156 [2024-07-22 11:19:34.628724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.628758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.628793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.628827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.628862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.628901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.628949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.628969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
01:23:10.156 [2024-07-22 11:19:34.629750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.629951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.629995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.630028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.156 [2024-07-22 11:19:34.630042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:23:10.156 [2024-07-22 11:19:34.630074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:34.630543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:34.630564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.629724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.629780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.629846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.629865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.629885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.629898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.629917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.629929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.629947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.629959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.630041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.630074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.630107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:10.157 [2024-07-22 11:19:41.630760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:23:10.157 [2024-07-22 11:19:41.630798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.630832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.630888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.630923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.630943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.630957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.157 [2024-07-22 11:19:41.631725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:23:10.157 [2024-07-22 11:19:41.631761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.631775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.631796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.631810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.631846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.631859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.631880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.631893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.631914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.631927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.631948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.631961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.631983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.631998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 
dnr:0 01:23:10.158 [2024-07-22 11:19:41.632164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.632803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.632817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.634972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.634999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.158 [2024-07-22 11:19:41.635026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:23:10.158 [2024-07-22 11:19:41.635055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:79 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:41.635851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:41.635866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.933799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.933839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.933863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.933877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.933890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.933904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.933917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.933929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.933942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.933954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934183] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.159 [2024-07-22 11:19:54.934468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.159 [2024-07-22 11:19:54.934481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 
nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:10.160 [2024-07-22 11:19:54.934733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.160 [2024-07-22 11:19:54.934747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111760 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
01:23:10.160 [2024-07-22 11:19:54.934759 .. 11:19:54.937700] [condensed: nvme_qpair.c prints each queued command on qid:1 as it is force-completed -- dozens of READ commands (lba 111768 through 112280, len:8, SGL TRANSPORT DATA BLOCK) and WRITE commands (lba 112296 through 112536, len:8, SGL DATA BLOCK OFFSET), every one completed as ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
01:23:10.162 [2024-07-22 11:19:54.937712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160eae0 is same with the state(5) to be set
01:23:10.162 [2024-07-22 11:19:54.937727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
01:23:10.162 [2024-07-22 11:19:54.937736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
01:23:10.162 [2024-07-22 11:19:54.937750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112288 len:8 PRP1 0x0 PRP2 0x0
01:23:10.162 [2024-07-22 11:19:54.937762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:23:10.162 [2024-07-22 11:19:54.937815] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x160eae0 was disconnected and freed. reset controller.
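The condensed storm above is expected behaviour for this test: when the active TCP path goes away, every command still queued on qid:1 is completed by the host driver with ABORTED - SQ DELETION (generic status type 00, status code 08) rather than being dropped silently. A quick way to size such a storm from a saved copy of this console output (the file name build.log is an assumption):

  # number of queued commands completed with the SQ DELETION status
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l
  # lowest and highest LBA touched by the aborted READ/WRITE commands
  grep -o 'lba:[0-9]*' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'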
01:23:10.162 [2024-07-22 11:19:54.937895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:23:10.162 [2024-07-22 11:19:54.937918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.162 [2024-07-22 11:19:54.937931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:23:10.162 [2024-07-22 11:19:54.937943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.162 [2024-07-22 11:19:54.937955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:23:10.162 [2024-07-22 11:19:54.937983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.162 [2024-07-22 11:19:54.938012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:23:10.162 [2024-07-22 11:19:54.938024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:10.162 [2024-07-22 11:19:54.938054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1614510 is same with the state(5) to be set 01:23:10.162 [2024-07-22 11:19:54.939314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:10.162 [2024-07-22 11:19:54.939379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1614510 (9): Bad file descriptor 01:23:10.162 [2024-07-22 11:19:54.939517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:23:10.162 [2024-07-22 11:19:54.939547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1614510 with addr=10.0.0.2, port=4421 01:23:10.162 [2024-07-22 11:19:54.939562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1614510 is same with the state(5) to be set 01:23:10.162 [2024-07-22 11:19:54.939587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1614510 (9): Bad file descriptor 01:23:10.162 [2024-07-22 11:19:54.939607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:10.162 [2024-07-22 11:19:54.939620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:10.162 [2024-07-22 11:19:54.939634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:23:10.162 [2024-07-22 11:19:54.939685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:23:10.162 [2024-07-22 11:19:54.939701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:10.162 [2024-07-22 11:20:05.022422] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
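The sequence above is the core of the multipath exercise: the active qpair is freed, the host's immediate reconnect to 10.0.0.2 port 4421 is refused (errno 111, connection refused), the controller is marked failed, and a retry roughly ten seconds later succeeds. The failover is provoked from the target side by removing and re-adding subsystem listeners; a hedged sketch of that kind of toggle, using only RPCs that appear elsewhere in this log (the exact ports, ordering and delays used by host/multipath.sh are not shown here and are assumptions):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # drop the active path; I/O queued on it is completed with ABORTED - SQ DELETION
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 10
  # bring up the alternate path; the host's pending reconnect to port 4421 then succeeds
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421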
01:23:10.162 Received shutdown signal, test time was about 55.045001 seconds
01:23:10.162
01:23:10.162 Latency(us)
01:23:10.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:23:10.162 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:23:10.162 Verification LBA range: start 0x0 length 0x4000
01:23:10.162 Nvme0n1 : 55.04 7990.15 31.21 0.00 0.00 15994.58 603.23 7046430.72
01:23:10.162 ===================================================================================================================
01:23:10.162 Total : 7990.15 31.21 0.00 0.00 15994.58 603.23 7046430.72
01:23:10.162 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
01:23:10.421 rmmod nvme_tcp
01:23:10.421 rmmod nvme_fabrics
01:23:10.421 rmmod nvme_keyring
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 113662 ']'
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 113662
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 113662 ']'
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 113662
01:23:10.421 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113662
01:23:10.679 killing process with pid 113662
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113662'
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 113662
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 113662
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:23:10.679 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:23:10.938 11:20:15 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:23:10.938 01:23:10.938 real 1m0.228s 01:23:10.938 user 2m49.833s 01:23:10.938 sys 0m14.008s 01:23:10.938 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 01:23:10.938 ************************************ 01:23:10.938 END TEST nvmf_host_multipath 01:23:10.938 ************************************ 01:23:10.938 11:20:15 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:23:10.938 11:20:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:23:10.938 11:20:15 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:23:10.938 11:20:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:23:10.938 11:20:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 01:23:10.938 11:20:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:23:10.938 ************************************ 01:23:10.938 START TEST nvmf_timeout 01:23:10.938 ************************************ 01:23:10.938 11:20:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:23:10.938 * Looking for test storage... 
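The nvmf_timeout suite that begins here is an ordinary bash script driven through the run_test wrapper; a hedged sketch of an equivalent standalone invocation (the path and --transport argument are taken from the trace, running it by hand outside the CI environment is the assumption):

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/host/timeout.sh --transport=tcp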
01:23:10.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:10.938 
11:20:16 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:10.938 11:20:16 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:23:10.939 11:20:16 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:23:10.939 Cannot find device "nvmf_tgt_br" 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:23:10.939 Cannot find device "nvmf_tgt_br2" 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:23:10.939 Cannot find device "nvmf_tgt_br" 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:23:10.939 Cannot find device "nvmf_tgt_br2" 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 01:23:10.939 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:23:11.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:23:11.196 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:23:11.196 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:23:11.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:23:11.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 01:23:11.197 01:23:11.197 --- 10.0.0.2 ping statistics --- 01:23:11.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:23:11.197 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:23:11.197 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:23:11.197 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 01:23:11.197 01:23:11.197 --- 10.0.0.3 ping statistics --- 01:23:11.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:23:11.197 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:23:11.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:23:11.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 01:23:11.197 01:23:11.197 --- 10.0.0.1 ping statistics --- 01:23:11.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:23:11.197 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=114993 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 114993 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 114993 ']' 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:11.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:23:11.197 11:20:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:23:11.454 [2024-07-22 11:20:16.452771] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
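The nvmf_veth_init trace above reduces to a small two-namespace topology: the initiator keeps 10.0.0.1 in the root namespace, the target gets 10.0.0.2 inside nvmf_tgt_ns_spdk, and a bridge joins the two veth pairs. A condensed sketch, with commands taken from the trace (the link-up calls and the second target interface are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # the trace also brings each link up before verifying connectivity
  ping -c 1 10.0.0.2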
01:23:11.454 [2024-07-22 11:20:16.452874] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:11.454 [2024-07-22 11:20:16.593758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:23:11.454 [2024-07-22 11:20:16.651148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:11.454 [2024-07-22 11:20:16.651207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:23:11.454 [2024-07-22 11:20:16.651232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:11.454 [2024-07-22 11:20:16.651240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:11.454 [2024-07-22 11:20:16.651246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:23:11.454 [2024-07-22 11:20:16.651387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:23:11.454 [2024-07-22 11:20:16.651396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:23:12.387 11:20:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:23:12.387 11:20:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:23:12.387 11:20:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:23:12.387 11:20:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 01:23:12.387 11:20:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:23:12.387 11:20:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:12.387 11:20:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:23:12.387 11:20:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:23:12.645 [2024-07-22 11:20:17.612547] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:12.645 11:20:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:23:12.902 Malloc0 01:23:12.902 11:20:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:23:13.159 11:20:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:23:13.415 11:20:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:23:13.672 [2024-07-22 11:20:18.643065] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:23:13.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
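For reference, the target-side provisioning traced in this stretch reduces to a short RPC sequence. The commands below are copied from the trace (the malloc size and block size match the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 settings shown earlier); the inline comments are interpretation rather than script output:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # create the TCP transport with the harness options
  $rpc bdev_malloc_create 64 512 -b Malloc0                    # RAM-backed bdev used as the namespace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420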
01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=115080 01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 115080 /var/tmp/bdevperf.sock 01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115080 ']' 01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:23:13.672 11:20:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:23:13.672 [2024-07-22 11:20:18.715597] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:23:13.672 [2024-07-22 11:20:18.715742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115080 ] 01:23:13.672 [2024-07-22 11:20:18.857206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:13.929 [2024-07-22 11:20:18.943790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:23:14.493 11:20:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:23:14.493 11:20:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:23:14.493 11:20:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:23:14.751 11:20:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:23:15.316 NVMe0n1 01:23:15.316 11:20:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:23:15.316 11:20:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=115126 01:23:15.316 11:20:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 01:23:15.316 Running I/O for 10 seconds... 
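[editor's note] On the host side, the test starts bdevperf with its own RPC socket and, once that socket is listening, attaches an NVMe-oF controller with a 5-second controller-loss timeout and a 2-second reconnect delay before kicking off the 10-second verify workload. A sketch of that sequence using the binaries and arguments shown in the trace; the -r -1 passed to bdev_nvme_set_options is reproduced verbatim from the log rather than interpreted:

spdk=/home/vagrant/spdk_repo/spdk                                   # repo path as seen in this log
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
# (the test waits for /var/tmp/bdevperf.sock to appear before issuing RPCs)
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests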
01:23:16.249 11:20:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:23:16.509 [2024-07-22 11:20:21.541505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.509 [2024-07-22 11:20:21.541678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541738] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the 
state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.541995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4520 is same with the state(5) to be set 01:23:16.510 [2024-07-22 11:20:21.542646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.542873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.542882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.510 [2024-07-22 11:20:21.543821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.510 [2024-07-22 11:20:21.543830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.543842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.543851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.543862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.543871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.544700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:23:16.511 [2024-07-22 11:20:21.544971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.544992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.545860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.545870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546182] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.546942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.546952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.547251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.547407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.547677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.547803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.547825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.547836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.548056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.548077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.548089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.548099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.548110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.548120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.548131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.548140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.548151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.548161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.548362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.511 [2024-07-22 11:20:21.548382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.511 [2024-07-22 11:20:21.548395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.512 [2024-07-22 11:20:21.548404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.512 [2024-07-22 11:20:21.548426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.512 [2024-07-22 11:20:21.548446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.548561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.548592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.548748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.548867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.548888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.548909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.548920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98632 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.549951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.549980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 
11:20:21.550352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.550880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.550891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.551019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.551032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.551042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.551274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.551287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.551300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.551309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.551320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.551330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.551341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.551589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.551605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.551614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.551626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.551635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.551786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:16.512 [2024-07-22 11:20:21.552029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.552047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.512 [2024-07-22 11:20:21.552057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.552180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.512 [2024-07-22 11:20:21.552200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.552330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.512 [2024-07-22 11:20:21.552350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.552487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.512 [2024-07-22 11:20:21.552606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.512 [2024-07-22 11:20:21.552627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.552638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.552871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.552891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.552903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.552912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.552924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.552934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.552945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.552954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.553094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.553241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.553528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.553679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.553816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.553919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.553940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.553951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.554069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.554082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.554093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.554103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.554384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.554397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.554409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.554418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.554429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.554438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.554450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.554564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.554582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.554591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.554722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.554865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.555000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.555114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 
[2024-07-22 11:20:21.555134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.555145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.555156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:16.513 [2024-07-22 11:20:21.555407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.555566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:16.513 [2024-07-22 11:20:21.555715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98544 len:8 PRP1 0x0 PRP2 0x0 01:23:16.513 [2024-07-22 11:20:21.555816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.555835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:16.513 [2024-07-22 11:20:21.555843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:16.513 [2024-07-22 11:20:21.555852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98552 len:8 PRP1 0x0 PRP2 0x0 01:23:16.513 [2024-07-22 11:20:21.555861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.555871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:16.513 [2024-07-22 11:20:21.555878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:16.513 [2024-07-22 11:20:21.555886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98560 len:8 PRP1 0x0 PRP2 0x0 01:23:16.513 [2024-07-22 11:20:21.556003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.556026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:16.513 [2024-07-22 11:20:21.556256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:16.513 [2024-07-22 11:20:21.556274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98568 len:8 PRP1 0x0 PRP2 0x0 01:23:16.513 [2024-07-22 11:20:21.556284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.556545] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c2be90 was disconnected and freed. reset controller. 
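[editor's note] The long nvme_qpair dump above is the host printing every outstanding bdevperf I/O (the workload runs at queue depth 128) as it is completed with ABORTED - SQ DELETION once the listener is removed and the qpair to 10.0.0.2:4420 is torn down; the final message records the disconnected qpair being freed and a controller reset being scheduled. When scanning a saved copy of output like this, a rough tally of the aborted completions can be taken with a one-liner such as the following (the filename is hypothetical, not part of the test):

grep -o 'ABORTED - SQ DELETION' nvmf_timeout.log | wc -l    # count occurrences, not just matching lines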
01:23:16.513 [2024-07-22 11:20:21.556754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:23:16.513 [2024-07-22 11:20:21.556778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.556789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:23:16.513 [2024-07-22 11:20:21.556798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.556809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:23:16.513 [2024-07-22 11:20:21.556817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.556827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:23:16.513 [2024-07-22 11:20:21.557056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:16.513 [2024-07-22 11:20:21.557079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d690 is same with the state(5) to be set 01:23:16.513 [2024-07-22 11:20:21.557526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:16.513 [2024-07-22 11:20:21.557560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0d690 (9): Bad file descriptor 01:23:16.513 [2024-07-22 11:20:21.557864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:23:16.513 [2024-07-22 11:20:21.557899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0d690 with addr=10.0.0.2, port=4420 01:23:16.513 [2024-07-22 11:20:21.557912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d690 is same with the state(5) to be set 01:23:16.513 [2024-07-22 11:20:21.557934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0d690 (9): Bad file descriptor 01:23:16.513 [2024-07-22 11:20:21.557951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:16.513 [2024-07-22 11:20:21.558091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:16.513 [2024-07-22 11:20:21.558105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:23:16.513 [2024-07-22 11:20:21.558363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
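Each reconnect attempt above fails in posix_sock_create() with errno = 111, which on Linux is ECONNREFUSED: the host is still reachable, but nothing is accepting connections on 10.0.0.2:4420 any longer, so bdev_nvme marks the controller failed and schedules another reset. A minimal sketch, reusing the RPC script and listener parameters that appear elsewhere in this log (it assumes an nvmf target with subsystem nqn.2016-06.io.spdk:cnode1 is running with the default RPC socket), of how the listener can be dropped and restored to provoke and then clear exactly this condition:

# illustrative sketch, not part of the test run
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # initiator reconnects now fail with ECONNREFUSED (errno 111)
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420     # listener restored, reconnects can succeed again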
01:23:16.513 [2024-07-22 11:20:21.558378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:16.513 11:20:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 01:23:18.418 [2024-07-22 11:20:23.558474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:23:18.418 [2024-07-22 11:20:23.558550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0d690 with addr=10.0.0.2, port=4420 01:23:18.418 [2024-07-22 11:20:23.558564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d690 is same with the state(5) to be set 01:23:18.418 [2024-07-22 11:20:23.558586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0d690 (9): Bad file descriptor 01:23:18.418 [2024-07-22 11:20:23.558602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:18.418 [2024-07-22 11:20:23.558611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:18.418 [2024-07-22 11:20:23.558621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:23:18.418 [2024-07-22 11:20:23.558644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:23:18.418 [2024-07-22 11:20:23.558654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:18.418 11:20:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 01:23:18.418 11:20:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 01:23:18.418 11:20:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:23:18.675 11:20:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 01:23:18.675 11:20:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 01:23:18.675 11:20:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 01:23:18.675 11:20:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 01:23:18.933 11:20:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 01:23:18.933 11:20:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 01:23:20.831 [2024-07-22 11:20:25.558772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:23:20.831 [2024-07-22 11:20:25.558853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0d690 with addr=10.0.0.2, port=4420 01:23:20.831 [2024-07-22 11:20:25.558868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d690 is same with the state(5) to be set 01:23:20.831 [2024-07-22 11:20:25.558892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0d690 (9): Bad file descriptor 01:23:20.831 [2024-07-22 11:20:25.558911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:20.831 [2024-07-22 11:20:25.558920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:20.831 [2024-07-22 11:20:25.558931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 01:23:20.831 [2024-07-22 11:20:25.558956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:23:20.831 [2024-07-22 11:20:25.558983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:22.763 [2024-07-22 11:20:27.559063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:23:22.763 [2024-07-22 11:20:27.559119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:22.763 [2024-07-22 11:20:27.559146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:22.763 [2024-07-22 11:20:27.559156] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 01:23:22.763 [2024-07-22 11:20:27.559180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:23:23.698 01:23:23.698 Latency(us) 01:23:23.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:23.698 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:23:23.698 Verification LBA range: start 0x0 length 0x4000 01:23:23.698 NVMe0n1 : 8.19 1492.49 5.83 15.62 0.00 84938.99 1884.16 7046430.72 01:23:23.698 =================================================================================================================== 01:23:23.698 Total : 1492.49 5.83 15.62 0.00 84938.99 1884.16 7046430.72 01:23:23.698 0 01:23:23.956 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 01:23:23.956 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:23:23.956 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 01:23:24.215 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 01:23:24.215 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 01:23:24.215 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 01:23:24.215 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 115126 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 115080 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115080 ']' 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115080 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115080 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115080' 01:23:24.474 killing process with pid 115080 01:23:24.474 Received shutdown signal, test time was about 9.258888 seconds 01:23:24.474 
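The get_controller/get_bdev checks above list the NVMe controllers and bdevs over the bdevperf RPC socket and extract their names with jq; once the controller has been torn down after the simulated path loss, both lists come back empty, which is what the [[ '' == '' ]] comparisons assert before the killprocess helper stops the bdevperf process (a kill -0 liveness probe, a ps lookup of the command name, then kill and wait). A minimal sketch of the same query pattern, reusing the rpc.py path and socket shown above:

# illustrative sketch of the verification pattern used by the test
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
ctrlr=$($rpc -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')  # empty once the controller has been deleted
bdev=$($rpc -s "$sock" bdev_get_bdevs | jq -r '.[].name')              # empty once NVMe0n1 is gone with it
[[ -z "$ctrlr" && -z "$bdev" ]] && echo 'controller and bdev removed as expected'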
01:23:24.474 Latency(us) 01:23:24.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:24.474 =================================================================================================================== 01:23:24.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115080 01:23:24.474 11:20:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115080 01:23:24.733 11:20:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:23:24.992 [2024-07-22 11:20:30.002471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:23:24.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=115280 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 115280 /var/tmp/bdevperf.sock 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115280 ']' 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 01:23:24.992 11:20:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:23:24.992 [2024-07-22 11:20:30.060352] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:23:24.992 [2024-07-22 11:20:30.060415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115280 ] 01:23:24.992 [2024-07-22 11:20:30.197906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:25.250 [2024-07-22 11:20:30.265755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:23:25.816 11:20:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:23:25.816 11:20:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 01:23:25.816 11:20:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:23:26.074 11:20:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 01:23:26.330 NVMe0n1 01:23:26.587 11:20:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=115322 01:23:26.587 11:20:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 01:23:26.587 11:20:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:23:26.587 Running I/O for 10 seconds... 01:23:27.518 11:20:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:23:27.778 [2024-07-22 11:20:32.805605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.805882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465d10 is same with the state(5) to be set 01:23:27.778 [2024-07-22 11:20:32.808482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.778 [2024-07-22 11:20:32.808539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.778 [2024-07-22 11:20:32.808578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.778 
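The second bdevperf run above attaches NVMe0/NVMe0n1 against 10.0.0.2:4420 with --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5, so once nvmf_subsystem_remove_listener pulls the port away the driver retries the TCP connection about once per second, starts failing queued I/O after roughly two seconds, and gives up and deletes the controller after roughly five. The attach step, rewrapped for readability (command and flag values are copied from the log; only the line breaks and comments are added, and the flag descriptions reflect the usual SPDK bdev_nvme semantics rather than anything stated in the log itself):

# --reconnect-delay-sec 1        retry the TCP connection about once per second
# --fast-io-fail-timeout-sec 2   start failing queued I/O after ~2 s without a connection
# --ctrlr-loss-timeout-sec 5     delete the controller after ~5 s of failed reconnects
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1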
[2024-07-22 11:20:32.808588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.778 [2024-07-22 11:20:32.808599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.778 [2024-07-22 11:20:32.808608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.778 [2024-07-22 11:20:32.808618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.778 [2024-07-22 11:20:32.808626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.778 [2024-07-22 11:20:32.808637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.778 [2024-07-22 11:20:32.808645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.778 [2024-07-22 11:20:32.808655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.778 [2024-07-22 11:20:32.808664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.778 [2024-07-22 11:20:32.808675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.778 [2024-07-22 11:20:32.808683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.778 [2024-07-22 11:20:32.808694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.778 [2024-07-22 11:20:32.808702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.778 [2024-07-22 11:20:32.808712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.808720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.808731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.808740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.808751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.808759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.808769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.809790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.809800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:27.779 [2024-07-22 11:20:32.810737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.810761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.810782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.810793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.810802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 
[2024-07-22 11:20:32.811611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.811714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.811726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.812039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.812060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.812070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.812081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.812090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.812102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.812111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.779 [2024-07-22 11:20:32.812122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.779 [2024-07-22 11:20:32.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:64 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.812908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.812918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96568 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:27.780 [2024-07-22 11:20:32.813619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.780 [2024-07-22 11:20:32.813660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 01:23:27.780 [2024-07-22 11:20:32.813810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.780 [2024-07-22 11:20:32.813931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.780 [2024-07-22 11:20:32.813939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 01:23:27.780 [2024-07-22 11:20:32.813949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.813987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.780 [2024-07-22 11:20:32.813995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 01:23:27.780 [2024-07-22 11:20:32.814003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96640 len:8 PRP1 0x0 PRP2 0x0 01:23:27.780 [2024-07-22 11:20:32.814012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.814021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.780 [2024-07-22 11:20:32.814028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.780 [2024-07-22 11:20:32.814036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96648 len:8 PRP1 0x0 PRP2 0x0 01:23:27.780 [2024-07-22 11:20:32.814045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.814054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.780 [2024-07-22 11:20:32.814062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.780 [2024-07-22 11:20:32.814069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96656 len:8 PRP1 0x0 PRP2 0x0 01:23:27.780 [2024-07-22 11:20:32.814078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.814087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.780 [2024-07-22 11:20:32.814094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.780 [2024-07-22 11:20:32.814101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0 01:23:27.780 [2024-07-22 11:20:32.814110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.814118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.780 [2024-07-22 11:20:32.814125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.780 [2024-07-22 11:20:32.814235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0 01:23:27.780 [2024-07-22 11:20:32.814250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.814261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.780 [2024-07-22 11:20:32.814268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.780 [2024-07-22 11:20:32.814276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0 01:23:27.780 [2024-07-22 11:20:32.814285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.780 [2024-07-22 11:20:32.814294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.780 [2024-07-22 11:20:32.814301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.780 
[2024-07-22 11:20:32.814403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.814941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.814949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.814966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.814975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96768 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96776 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96784 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96792 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96800 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96808 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96816 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96824 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.815942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.815952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.815971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.815979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96832 len:8 PRP1 0x0 PRP2 0x0 
01:23:27.781 [2024-07-22 11:20:32.816075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.816091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.816116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.816139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96840 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.816148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.816164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.816252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.816267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96848 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.816276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.816286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.816293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.816302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96856 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.816310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.816609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.816671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.816680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96864 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.816690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.816700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.816707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.816714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96872 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.816722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.816731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.816738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.816745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96880 len:8 PRP1 0x0 PRP2 0x0 01:23:27.781 [2024-07-22 11:20:32.816758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.781 [2024-07-22 11:20:32.816767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.781 [2024-07-22 11:20:32.816774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.781 [2024-07-22 11:20:32.816781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96888 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.816790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.816798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.816805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.816812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96896 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.816820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.816838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.816849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.816946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96904 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96912 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96184 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96192 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.817923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.817930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.817937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96200 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:23:27.782 [2024-07-22 11:20:32.817954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.818058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.818068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96208 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.818077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.818087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.818095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.818107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96216 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.818116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.818218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.818227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.818235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96224 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.818244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.818253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:27.782 [2024-07-22 11:20:32.818260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:27.782 [2024-07-22 11:20:32.818268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96232 len:8 PRP1 0x0 PRP2 0x0 01:23:27.782 [2024-07-22 11:20:32.818276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:27.782 [2024-07-22 11:20:32.818422] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14a3e90 was disconnected and freed. reset controller. 
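The run of WRITE/READ commands above, each answered with "ABORTED - SQ DELETION (00/08)", is the NVMe host driver aborting every request still queued on I/O qpair 1 after its TCP connection to the target dropped; bdev_nvme then frees the qpair (0x14a3e90) and starts a controller reset, which is the failure mode this nvmf_timeout test exercises. The shell sketch below is illustrative only (it is not part of the test scripts); it reuses the rpc.py invocations, NQN, address, port and repo path that appear verbatim elsewhere in this log.

# Sketch only, assuming a running SPDK nvmf TCP target and a bdevperf job in flight.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the TCP listener while I/O is outstanding; requests queued on the live
# qpair are completed manually as aborted (the SQ DELETION notices above) and
# the host begins resetting the controller.
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Restore the listener so the host's reconnect poller can eventually succeed.
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420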
01:23:27.782 [2024-07-22 11:20:32.818707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
01:23:27.782 [2024-07-22 11:20:32.818730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:23:27.782 [2024-07-22 11:20:32.818809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
01:23:27.782 [2024-07-22 11:20:32.818822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:23:27.782 [2024-07-22 11:20:32.818831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
01:23:27.782 [2024-07-22 11:20:32.818839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:23:27.782 [2024-07-22 11:20:32.818849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
01:23:27.782 [2024-07-22 11:20:32.818857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:23:27.782 [2024-07-22 11:20:32.818871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(5) to be set
01:23:27.782 [2024-07-22 11:20:32.819442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:23:27.782 [2024-07-22 11:20:32.819487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485690 (9): Bad file descriptor
01:23:27.782 [2024-07-22 11:20:32.819585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:23:27.782 [2024-07-22 11:20:32.819834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1485690 with addr=10.0.0.2, port=4420
01:23:27.782 [2024-07-22 11:20:32.819849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(5) to be set
01:23:27.783 [2024-07-22 11:20:32.819869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485690 (9): Bad file descriptor
01:23:27.783 [2024-07-22 11:20:32.819885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:23:27.783 [2024-07-22 11:20:32.819895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:23:27.783 [2024-07-22 11:20:32.819906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:23:27.783 [2024-07-22 11:20:32.820273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
01:23:27.783 [2024-07-22 11:20:32.820300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:23:27.783 11:20:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
01:23:28.714 [2024-07-22 11:20:33.820421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:23:28.714 [2024-07-22 11:20:33.820480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1485690 with addr=10.0.0.2, port=4420
01:23:28.714 [2024-07-22 11:20:33.820493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(5) to be set
01:23:28.714 [2024-07-22 11:20:33.820514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485690 (9): Bad file descriptor
01:23:28.714 [2024-07-22 11:20:33.820531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:23:28.714 [2024-07-22 11:20:33.820540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:23:28.714 [2024-07-22 11:20:33.820550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:23:28.714 [2024-07-22 11:20:33.820571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
01:23:28.714 [2024-07-22 11:20:33.820582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:23:28.714 11:20:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
01:23:28.973 [2024-07-22 11:20:34.075087] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
01:23:28.973 11:20:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 115322
01:23:29.907 [2024-07-22 11:20:34.837923] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
01:23:36.482
01:23:36.482                              Latency(us)
01:23:36.482 Device Information          : runtime(s)    IOPS      MiB/s   Fail/s   TO/s     Average      min        max
01:23:36.482 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:23:36.482 Verification LBA range: start 0x0 length 0x4000
01:23:36.482 NVMe0n1                      :      10.00   7290.95   28.48   0.00     0.00     17526.47     1645.85    3035150.89
01:23:36.482 ===================================================================================================================
01:23:36.482 Total                        :              7290.95   28.48   0.00     0.00     17526.47     1645.85    3035150.89
01:23:36.482 0
01:23:36.482 11:20:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=115438
01:23:36.482 11:20:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:23:36.482 11:20:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
01:23:36.740 Running I/O for 10 seconds...
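As a quick cross-check on the bdevperf summary above (an observation added here, not log output): 7290.95 IOPS of 4096-byte verify I/O works out to 7290.95 * 4096 / 1048576 ~= 28.48 MiB/s, matching the MiB/s column, and the latency columns are in microseconds, so the ~3,035,151 us maximum is consistent with requests that sat queued for a few seconds while the listener was down and the controller was being reset. The follow-up 10-second run announced above is triggered the same way the test does it; a sketch using the socket path and script location printed in this log:

# Sketch only; the bdevperf application must already be running with its RPC
# socket at this path, otherwise perform_tests has nothing to talk to.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

The next step in the trace then removes the TCP listener again, which is what produces the second wave of aborted I/O that follows.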
01:23:37.672 11:20:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:23:37.932 [2024-07-22 11:20:42.908679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 
11:20:42.908921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.908957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.908983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.932 [2024-07-22 11:20:42.909225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.932 [2024-07-22 11:20:42.909236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.909653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.909662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.910124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.910148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.910168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.910188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.933 [2024-07-22 11:20:42.910209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.933 [2024-07-22 11:20:42.910229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.933 [2024-07-22 11:20:42.910250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.933 [2024-07-22 11:20:42.910271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 
11:20:42.910535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.933 [2024-07-22 11:20:42.910548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.933 [2024-07-22 11:20:42.910568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.933 [2024-07-22 11:20:42.910588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.933 [2024-07-22 11:20:42.910607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.933 [2024-07-22 11:20:42.910618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.933 [2024-07-22 11:20:42.910627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.910950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.910960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.934 [2024-07-22 11:20:42.910984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73128 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.934 [2024-07-22 11:20:42.911729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.934 [2024-07-22 11:20:42.911739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.911750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.911759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.911770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.911779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.911790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:23:37.935 [2024-07-22 11:20:42.911868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.911885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.911895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.911906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.911916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.911927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.911936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.911947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.912175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.912192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.912203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.912214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.912223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.912352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.912368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.912629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.912648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.912660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.912762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.912778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.912788] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.912800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.912809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.913066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.913089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.913228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.913533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.913660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.913684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.913704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.913836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.913856] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.913867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:23:37.935 [2024-07-22 11:20:42.914028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.914184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.914289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.914305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.914316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.914328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.914337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.914348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.914358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.914368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.914619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.914635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.914879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.914901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.914911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.935 [2024-07-22 11:20:42.914923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.935 [2024-07-22 11:20:42.914932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.914943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.936 [2024-07-22 11:20:42.914952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.914991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.936 [2024-07-22 11:20:42.915251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.915273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.936 [2024-07-22 11:20:42.915371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.915402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.936 [2024-07-22 11:20:42.915411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.915422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.936 [2024-07-22 11:20:42.915431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.915442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.936 [2024-07-22 11:20:42.915451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.915661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:37.936 [2024-07-22 11:20:42.915699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.915712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7ac0 is same with the state(5) to be set 01:23:37.936 [2024-07-22 11:20:42.915725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:37.936 [2024-07-22 11:20:42.915733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:37.936 [2024-07-22 11:20:42.915741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73432 len:8 PRP1 0x0 PRP2 0x0 01:23:37.936 [2024-07-22 11:20:42.915750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.915803] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14a7ac0 was disconnected and freed. reset controller. 
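Every completion in the dump above carries the status ABORTED - SQ DELETION (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion". That is the expected way for the host to fail every command still queued on the I/O submission queue when the qpair is torn down for the reset. As a rough, stand-alone illustration (the helper below is hypothetical and not part of the SPDK tree or this test):

    # decode_nvme_generic_status SCT SC -- hypothetical helper for reading the
    # "(SCT/SC)" pair that spdk_nvme_print_completion prints, e.g. (00/08).
    decode_nvme_generic_status() {
        local sct=$1 sc=$2
        case "${sct}/${sc}" in
            00/00) echo "GENERIC - SUCCESSFUL COMPLETION" ;;
            00/04) echo "GENERIC - DATA TRANSFER ERROR" ;;
            00/07) echo "GENERIC - COMMAND ABORT REQUESTED" ;;
            00/08) echo "GENERIC - COMMAND ABORTED DUE TO SQ DELETION" ;;
            *)     echo "SCT=0x${sct} SC=0x${sc} (see the NVMe base spec, Generic Command Status values)" ;;
        esac
    }
    decode_nvme_generic_status 00 08   # prints: GENERIC - COMMAND ABORTED DUE TO SQ DELETION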
01:23:37.936 [2024-07-22 11:20:42.916357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:23:37.936 [2024-07-22 11:20:42.916397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.916409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:23:37.936 [2024-07-22 11:20:42.916418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.916427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:23:37.936 [2024-07-22 11:20:42.916436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.916446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:23:37.936 [2024-07-22 11:20:42.916454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:37.936 [2024-07-22 11:20:42.916463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(5) to be set 01:23:37.936 [2024-07-22 11:20:42.916900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:37.936 [2024-07-22 11:20:42.916927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485690 (9): Bad file descriptor 01:23:37.936 [2024-07-22 11:20:42.917090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:23:37.936 [2024-07-22 11:20:42.917175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1485690 with addr=10.0.0.2, port=4420 01:23:37.936 [2024-07-22 11:20:42.917186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(5) to be set 01:23:37.936 [2024-07-22 11:20:42.917206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485690 (9): Bad file descriptor 01:23:37.936 [2024-07-22 11:20:42.917221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:37.936 [2024-07-22 11:20:42.917453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:37.936 [2024-07-22 11:20:42.917474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:23:37.936 [2024-07-22 11:20:42.917496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
01:23:37.936 [2024-07-22 11:20:42.917507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:23:37.936 11:20:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
01:23:38.870 [2024-07-22 11:20:43.917596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:23:38.870 [2024-07-22 11:20:43.917653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1485690 with addr=10.0.0.2, port=4420
01:23:38.870 [2024-07-22 11:20:43.917667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(5) to be set
01:23:38.870 [2024-07-22 11:20:43.917685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485690 (9): Bad file descriptor
01:23:38.870 [2024-07-22 11:20:43.917700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:23:38.870 [2024-07-22 11:20:43.917709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:23:38.870 [2024-07-22 11:20:43.917718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:23:38.870 [2024-07-22 11:20:43.917737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
01:23:38.870 [2024-07-22 11:20:43.917747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:23:39.804 [2024-07-22 11:20:44.917832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:23:39.804 [2024-07-22 11:20:44.917889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1485690 with addr=10.0.0.2, port=4420
01:23:39.804 [2024-07-22 11:20:44.917901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(5) to be set
01:23:39.804 [2024-07-22 11:20:44.917920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485690 (9): Bad file descriptor
01:23:39.804 [2024-07-22 11:20:44.917934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:23:39.804 [2024-07-22 11:20:44.917943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:23:39.804 [2024-07-22 11:20:44.917953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:23:39.804 [2024-07-22 11:20:44.917984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
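Each failed attempt follows the same shape: the TCP connect() to 10.0.0.2:4420 is refused (errno 111, ECONNREFUSED) because the target listener has been removed, controller reinitialization fails, and bdev_nvme immediately schedules the next reset. While this loop runs, the retry state can also be watched from outside through the bdevperf RPC socket; a minimal sketch, assuming the /var/tmp/bdevperf.sock path and rpc.py location used elsewhere in this run:

    # Poll the bdev_nvme controller list while the target listener is down.
    # bdev_nvme_get_controllers is a standard SPDK RPC; the output is JSON.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    for _ in 1 2 3 4 5; do
        "$RPC" -s "$SOCK" bdev_nvme_get_controllers
        sleep 1
    done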
01:23:39.804 [2024-07-22 11:20:44.917995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:23:40.737 [2024-07-22 11:20:45.920848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:23:40.737 [2024-07-22 11:20:45.920922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1485690 with addr=10.0.0.2, port=4420
01:23:40.737 [2024-07-22 11:20:45.920936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(5) to be set
01:23:40.737 [2024-07-22 11:20:45.921383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485690 (9): Bad file descriptor
01:23:40.737 [2024-07-22 11:20:45.921816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
01:23:40.737 [2024-07-22 11:20:45.921845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
01:23:40.737 [2024-07-22 11:20:45.921856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
01:23:40.737 [2024-07-22 11:20:45.925697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
01:23:40.737 [2024-07-22 11:20:45.925742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
01:23:40.737 11:20:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
01:23:40.994 [2024-07-22 11:20:46.174091] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
01:23:40.994 11:20:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 115438
01:23:41.928 [2024-07-22 11:20:46.957436] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
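The retries only start succeeding once host/timeout.sh@102 re-adds the TCP listener on the target side; the very next reconnect then completes ("Resetting controller successful"). Stripped of the test harness, the target-side step is just the listener RPC shown in the trace, roughly as below (subsystem NQN and address taken from the log; the nvmf_subsystem_get_listeners call is only an optional sanity check):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Re-create the TCP listener that the test removed earlier.
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Confirm the listener is back before expecting the host side to reconnect.
    "$RPC" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1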
01:23:47.187
01:23:47.187 Latency(us)
01:23:47.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:23:47.187 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:23:47.187 Verification LBA range: start 0x0 length 0x4000
01:23:47.187 NVMe0n1 : 10.01 6082.33 23.76 4103.19 0.00 12538.29 744.73 3019898.88
01:23:47.187 ===================================================================================================================
01:23:47.187 Total : 6082.33 23.76 4103.19 0.00 12538.29 0.00 3019898.88
01:23:47.187 0
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 115280
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115280 ']'
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115280
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115280
01:23:47.187 killing process with pid 115280
01:23:47.187 Received shutdown signal, test time was about 10.000000 seconds
01:23:47.187
01:23:47.187 Latency(us)
01:23:47.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:23:47.187 ===================================================================================================================
01:23:47.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115280'
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115280
01:23:47.187 11:20:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115280
01:23:47.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=115559
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 115559 /var/tmp/bdevperf.sock
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115559 ']'
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
01:23:47.187 11:20:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
01:23:47.187 [2024-07-22 11:20:52.093819] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization...
01:23:47.188 [2024-07-22 11:20:52.093920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115559 ]
01:23:47.188 [2024-07-22 11:20:52.233347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
01:23:47.188 [2024-07-22 11:20:52.302855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
01:23:48.123 11:20:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
01:23:48.123 11:20:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
01:23:48.123 11:20:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 115559 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
01:23:48.123 11:20:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=115583
01:23:48.123 11:20:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
01:23:48.123 11:20:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
01:23:48.380 NVMe0n1
01:23:48.380 11:20:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=115642
01:23:48.380 11:20:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:23:48.380 11:20:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
01:23:48.638 Running I/O for 10 seconds...
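For this second pass the controller is attached through bdevperf's own RPC socket with an explicit reconnect policy: --ctrlr-loss-timeout-sec 5 allows up to five seconds of reconnect attempts before the controller is treated as lost, and --reconnect-delay-sec 2 spaces those attempts two seconds apart. Reduced to the essential calls (paths, names and flags exactly as in the trace above; the bdev_nvme_set_options arguments are left as the script passes them):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # Global bdev_nvme options, as passed by host/timeout.sh@118.
    "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
    # Attach the NVMe-oF/TCP controller with a bounded reconnect policy.
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Queue and start the bdevperf job (host/timeout.sh@123).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests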
01:23:49.571 11:20:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:23:49.832 [2024-07-22 11:20:54.832581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832769] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.832830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf7a0 is same with the state(5) to be set 01:23:49.832 [2024-07-22 11:20:54.833092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 
[2024-07-22 11:20:54.833221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.832 [2024-07-22 11:20:54.833528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.832 [2024-07-22 11:20:54.833535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:23:49.833 [2024-07-22 11:20:54.833943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.833961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.833985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834149] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.833 [2024-07-22 11:20:54.834413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.833 [2024-07-22 11:20:54.834421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:47 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105480 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:23:49.834 [2024-07-22 11:20:54.834890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.834982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.834991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 
11:20:54.835077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.834 [2024-07-22 11:20:54.835296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.834 [2024-07-22 11:20:54.835306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.835 [2024-07-22 11:20:54.835313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.835 [2024-07-22 11:20:54.835331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.835 [2024-07-22 11:20:54.835347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.835 [2024-07-22 11:20:54.835364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.835 [2024-07-22 11:20:54.835381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.835 [2024-07-22 11:20:54.835403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.835 [2024-07-22 11:20:54.835421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:23:49.835 [2024-07-22 11:20:54.835438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8de90 is same with the state(5) to be set 01:23:49.835 [2024-07-22 11:20:54.835457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:23:49.835 [2024-07-22 11:20:54.835463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:23:49.835 [2024-07-22 11:20:54.835470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96552 len:8 PRP1 0x0 PRP2 0x0 01:23:49.835 [2024-07-22 11:20:54.835478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:23:49.835 [2024-07-22 11:20:54.835531] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d8de90 was disconnected and freed. reset controller. 01:23:49.835 [2024-07-22 11:20:54.835813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:49.835 [2024-07-22 11:20:54.835918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6f690 (9): Bad file descriptor 01:23:49.835 [2024-07-22 11:20:54.836097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:23:49.835 [2024-07-22 11:20:54.836133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6f690 with addr=10.0.0.2, port=4420 01:23:49.835 [2024-07-22 11:20:54.836143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6f690 is same with the state(5) to be set 01:23:49.835 [2024-07-22 11:20:54.836160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6f690 (9): Bad file descriptor 01:23:49.835 [2024-07-22 11:20:54.836174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:49.835 [2024-07-22 11:20:54.836183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:49.835 [2024-07-22 11:20:54.836192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:23:49.835 [2024-07-22 11:20:54.836211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
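The errors in this stretch of the log are bdev_nvme's reconnect path at work: the target has stopped listening on 10.0.0.2:4420, so each reconnect attempt fails with connect() errno 111 (ECONNREFUSED), controller re-initialization is abandoned ("Resetting controller failed."), and another reset is scheduled after the reconnect delay. As a rough illustration only (this is not SPDK code; the two-second delay and eight-second give-up window are inferred from the timestamps in this run, and try_connect is a hypothetical stand-in for the NVMe/TCP connect), the retry policy behaves like:

# Illustrative model of a fixed-delay reconnect loop with a controller-loss timeout.
reconnect_delay=2      # seconds between reconnect attempts (matches the ~2 s cadence seen here)
ctrlr_loss_timeout=8   # give up after roughly this long in the failed state
try_connect() {        # hypothetical stand-in for the transport-level connect to 10.0.0.2:4420
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null
}
start=$SECONDS
until try_connect; do
    if (( SECONDS - start >= ctrlr_loss_timeout )); then
        echo "controller loss timeout reached; controller left in failed state" >&2
        break
    fi
    echo "reconnect failed (connection refused); retrying in ${reconnect_delay}s" >&2
    sleep "$reconnect_delay"
done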
01:23:49.835 [2024-07-22 11:20:54.836221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:49.835 11:20:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 115642 01:23:51.782 [2024-07-22 11:20:56.836369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:23:51.782 [2024-07-22 11:20:56.836428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6f690 with addr=10.0.0.2, port=4420 01:23:51.782 [2024-07-22 11:20:56.836442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6f690 is same with the state(5) to be set 01:23:51.782 [2024-07-22 11:20:56.836464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6f690 (9): Bad file descriptor 01:23:51.782 [2024-07-22 11:20:56.836491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:51.782 [2024-07-22 11:20:56.836501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:51.782 [2024-07-22 11:20:56.836511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:23:51.782 [2024-07-22 11:20:56.836535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:23:51.782 [2024-07-22 11:20:56.836545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:53.681 [2024-07-22 11:20:58.836691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:23:53.681 [2024-07-22 11:20:58.836748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6f690 with addr=10.0.0.2, port=4420 01:23:53.681 [2024-07-22 11:20:58.836763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6f690 is same with the state(5) to be set 01:23:53.681 [2024-07-22 11:20:58.836785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6f690 (9): Bad file descriptor 01:23:53.681 [2024-07-22 11:20:58.836801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:53.681 [2024-07-22 11:20:58.836810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:53.681 [2024-07-22 11:20:58.836820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 01:23:53.681 [2024-07-22 11:20:58.836843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:23:53.681 [2024-07-22 11:20:58.836853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 01:23:56.211 [2024-07-22 11:21:00.836917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
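Once the controller lands in the failed state, the output that follows prints the probe trace (the "Attaching 5 probes" banner suggests a bpftrace-style script, though the tool itself is not shown here) and then grades the run by counting 'reconnect delay bdev controller NVMe0' events, requiring more than two of them; three delays at roughly two-second spacing satisfy that. A standalone version of that check, with the trace path taken from this run and a helper name that is purely illustrative, might look like:

# Hypothetical standalone check: require more than two reconnect-delay probe hits.
check_reconnect_delays() {
    local trace_file=$1 hits
    hits=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
    if (( hits <= 2 )); then
        echo "expected more than 2 reconnect delays, saw $hits" >&2
        return 1
    fi
    echo "observed $hits reconnect delays"
}
check_reconnect_delays /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt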
01:23:56.211 [2024-07-22 11:21:00.836962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 01:23:56.211 [2024-07-22 11:21:00.836980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 01:23:56.211 [2024-07-22 11:21:00.836990] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 01:23:56.211 [2024-07-22 11:21:00.837012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 01:23:56.778 01:23:56.778 Latency(us) 01:23:56.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:56.778 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 01:23:56.778 NVMe0n1 : 8.14 2771.33 10.83 15.72 0.00 45884.39 1995.87 7015926.69 01:23:56.778 =================================================================================================================== 01:23:56.778 Total : 2771.33 10.83 15.72 0.00 45884.39 1995.87 7015926.69 01:23:56.778 0 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:23:56.778 Attaching 5 probes... 01:23:56.778 1232.060096: reset bdev controller NVMe0 01:23:56.778 1232.217518: reconnect bdev controller NVMe0 01:23:56.778 3232.505972: reconnect delay bdev controller NVMe0 01:23:56.778 3232.539582: reconnect bdev controller NVMe0 01:23:56.778 5232.834582: reconnect delay bdev controller NVMe0 01:23:56.778 5232.867902: reconnect bdev controller NVMe0 01:23:56.778 7233.143834: reconnect delay bdev controller NVMe0 01:23:56.778 7233.175254: reconnect bdev controller NVMe0 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 115583 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 115559 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115559 ']' 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115559 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115559 01:23:56.778 killing process with pid 115559 01:23:56.778 Received shutdown signal, test time was about 8.205946 seconds 01:23:56.778 01:23:56.778 Latency(us) 01:23:56.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:56.778 =================================================================================================================== 01:23:56.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115559' 01:23:56.778 11:21:01 
nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115559 01:23:56.778 11:21:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115559 01:23:57.037 11:21:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:23:57.296 rmmod nvme_tcp 01:23:57.296 rmmod nvme_fabrics 01:23:57.296 rmmod nvme_keyring 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 114993 ']' 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 114993 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 114993 ']' 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 114993 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:23:57.296 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114993 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:23:57.554 killing process with pid 114993 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114993' 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 114993 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 114993 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:23:57.554 01:23:57.554 real 
0m46.804s 01:23:57.554 user 2m17.797s 01:23:57.554 sys 0m4.895s 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 01:23:57.554 11:21:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:23:57.554 ************************************ 01:23:57.554 END TEST nvmf_timeout 01:23:57.554 ************************************ 01:23:57.813 11:21:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 01:23:57.813 11:21:02 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 01:23:57.813 11:21:02 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 01:23:57.813 11:21:02 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:23:57.813 11:21:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:23:57.813 11:21:02 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 01:23:57.813 01:23:57.813 real 21m37.904s 01:23:57.813 user 64m25.974s 01:23:57.813 sys 4m37.314s 01:23:57.813 11:21:02 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:23:57.813 ************************************ 01:23:57.813 11:21:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:23:57.813 END TEST nvmf_tcp 01:23:57.813 ************************************ 01:23:57.813 11:21:02 -- common/autotest_common.sh@1142 -- # return 0 01:23:57.813 11:21:02 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 01:23:57.813 11:21:02 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:23:57.813 11:21:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:23:57.813 11:21:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:23:57.813 11:21:02 -- common/autotest_common.sh@10 -- # set +x 01:23:57.813 ************************************ 01:23:57.813 START TEST spdkcli_nvmf_tcp 01:23:57.813 ************************************ 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:23:57.813 * Looking for test storage... 
01:23:57.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 01:23:57.813 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=115853 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 115853 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 115853 ']' 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 01:23:57.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
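The waitforlisten step above is the usual pattern for bringing up an SPDK target before driving it over JSON-RPC: launch nvmf_tgt with a core mask, then poll until its UNIX-domain RPC socket answers. A minimal hand-rolled equivalent, assuming the repo layout shown in this run and assuming rpc_get_methods is available as a cheap liveness probe, could look like:

# Start the NVMe-oF target on cores 0-1 and wait for /var/tmp/spdk.sock to respond.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $tgt_pid) is ready"
        break
    fi
    sleep 0.2
done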
01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 01:23:57.814 11:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:23:58.072 [2024-07-22 11:21:03.042723] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:23:58.072 [2024-07-22 11:21:03.042810] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115853 ] 01:23:58.072 [2024-07-22 11:21:03.170752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 01:23:58.072 [2024-07-22 11:21:03.237020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:23:58.072 [2024-07-22 11:21:03.237022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:23:59.005 11:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:23:59.005 11:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 01:23:59.005 11:21:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 01:23:59.005 11:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:23:59.005 11:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:23:59.005 11:21:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 01:23:59.005 11:21:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 01:23:59.005 11:21:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 01:23:59.005 11:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:23:59.005 11:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:23:59.005 11:21:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 01:23:59.005 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 01:23:59.005 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 01:23:59.005 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 01:23:59.005 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 01:23:59.005 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 01:23:59.005 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 01:23:59.005 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:23:59.005 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 01:23:59.005 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:23:59.005 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 01:23:59.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 01:23:59.005 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 01:23:59.005 ' 01:24:01.546 [2024-07-22 11:21:06.693963] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:02.919 [2024-07-22 11:21:07.958876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 01:24:05.448 [2024-07-22 11:21:10.316388] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 01:24:07.346 [2024-07-22 11:21:12.349545] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 01:24:08.718 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 01:24:08.718 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 01:24:08.718 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 01:24:08.718 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 01:24:08.718 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 01:24:08.718 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 01:24:08.718 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 01:24:08.718 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 01:24:08.718 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:24:08.718 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:24:08.718 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 01:24:08.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 01:24:08.718 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 01:24:08.976 11:21:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 01:24:08.976 11:21:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:24:08.976 11:21:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:24:08.976 11:21:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 01:24:08.976 11:21:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:24:08.976 11:21:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:24:08.976 11:21:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 01:24:08.976 11:21:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:24:09.542 11:21:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 01:24:09.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 01:24:09.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:24:09.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 01:24:09.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 01:24:09.542 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 01:24:09.542 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 01:24:09.542 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:24:09.542 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 01:24:09.542 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 01:24:09.542 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 01:24:09.542 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 01:24:09.542 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 01:24:09.542 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 01:24:09.542 ' 01:24:14.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 01:24:14.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 01:24:14.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 01:24:14.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 01:24:14.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 01:24:14.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 01:24:14.821 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 01:24:14.821 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 01:24:14.821 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 01:24:14.821 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 01:24:14.821 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 01:24:14.821 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 01:24:14.821 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 
01:24:14.821 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 01:24:14.821 11:21:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 01:24:14.821 11:21:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 01:24:14.821 11:21:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 115853 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 115853 ']' 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 115853 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115853 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115853' 01:24:15.079 killing process with pid 115853 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 115853 01:24:15.079 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 115853 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 115853 ']' 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 115853 01:24:15.336 Process with pid 115853 is not found 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 115853 ']' 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 115853 01:24:15.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (115853) - No such process 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 115853 is not found' 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 01:24:15.336 ************************************ 01:24:15.336 END TEST spdkcli_nvmf_tcp 01:24:15.336 ************************************ 01:24:15.336 01:24:15.336 real 0m17.491s 01:24:15.336 user 0m37.745s 01:24:15.336 sys 0m0.947s 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 01:24:15.336 11:21:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:24:15.336 11:21:20 -- common/autotest_common.sh@1142 -- # return 0 01:24:15.336 11:21:20 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:24:15.336 11:21:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 01:24:15.336 11:21:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:24:15.336 11:21:20 -- common/autotest_common.sh@10 -- # set +x 01:24:15.336 
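The spdkcli_nvmf_tcp run that just completed is a thin front end over the same JSON-RPC methods the rest of this log drives with scripts/rpc.py. For reference, a roughly equivalent create/teardown done directly over RPC could be sketched as below; the method names are standard SPDK RPCs, but the short-flag spellings are quoted from memory and worth confirming against rpc.py --help in this tree:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# 32 MiB malloc bdev with 512-byte blocks, same shape as '/bdevs/malloc create 32 512 Malloc1'
$RPC bdev_malloc_create 32 512 -b Malloc1
# TCP transport (tuning such as io_unit_size / max_io_qpairs_per_ctrlr is also set here)
$RPC nvmf_create_transport -t tcp
# Subsystem with a serial number and allow-any-host, plus a namespace and a TCP listener
$RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -a
$RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260
# Teardown mirrors the delete commands executed above
$RPC nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1
$RPC bdev_malloc_delete Malloc1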
************************************ 01:24:15.336 START TEST nvmf_identify_passthru 01:24:15.336 ************************************ 01:24:15.336 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:24:15.336 * Looking for test storage... 01:24:15.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:24:15.336 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:15.336 11:21:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:15.336 11:21:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:15.336 11:21:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 01:24:15.336 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:15.336 11:21:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:15.336 11:21:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:15.336 11:21:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:24:15.336 11:21:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:15.336 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 01:24:15.336 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:15.336 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:24:15.336 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:24:15.594 Cannot find device "nvmf_tgt_br" 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:24:15.594 Cannot find device "nvmf_tgt_br2" 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:24:15.594 Cannot find device "nvmf_tgt_br" 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:24:15.594 Cannot find device "nvmf_tgt_br2" 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:15.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:15.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:15.594 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:24:15.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:24:15.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 01:24:15.851 01:24:15.851 --- 10.0.0.2 ping statistics --- 01:24:15.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:15.851 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:24:15.851 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:15.851 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 01:24:15.851 01:24:15.851 --- 10.0.0.3 ping statistics --- 01:24:15.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:15.851 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:15.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:24:15.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:24:15.851 01:24:15.851 --- 10.0.0.1 ping statistics --- 01:24:15.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:15.851 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:24:15.851 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:15.852 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:24:15.852 11:21:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:24:15.852 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:15.852 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:24:15.852 11:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 01:24:15.852 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 01:24:15.852 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 01:24:15.852 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 01:24:15.852 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:24:15.852 11:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 01:24:16.109 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
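For reference, the nvmf_veth_init sequence traced above builds a small two-namespace topology: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1/24), the target side lives in the nvmf_tgt_ns_spdk namespace on nvmf_tgt_if / nvmf_tgt_if2 (10.0.0.2/24 and 10.0.0.3/24), and the peer ends of the veth pairs are enslaved to the nvmf_br bridge. Condensed from the commands in the trace above (interface names, addresses and the 4420 port are the ones this test suite uses), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up ; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up ; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge ; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) traffic through
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 ; ping -c 1 10.0.0.3                                   # initiator -> target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target namespace -> initiator

The three pings above are exactly the connectivity check the trace performs before the target is started.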
01:24:16.109 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:24:16.109 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 01:24:16.109 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 01:24:16.367 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 01:24:16.367 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:16.367 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:16.367 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=116342 01:24:16.367 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:24:16.367 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:24:16.367 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 116342 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 116342 ']' 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:16.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 01:24:16.367 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:16.367 [2024-07-22 11:21:21.450844] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:24:16.367 [2024-07-22 11:21:21.450944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:16.625 [2024-07-22 11:21:21.597913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:24:16.625 [2024-07-22 11:21:21.679152] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:16.625 [2024-07-22 11:21:21.679214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:16.625 [2024-07-22 11:21:21.679229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:16.625 [2024-07-22 11:21:21.679240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
01:24:16.625 [2024-07-22 11:21:21.679251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:24:16.625 [2024-07-22 11:21:21.679413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:24:16.625 [2024-07-22 11:21:21.680291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:24:16.625 [2024-07-22 11:21:21.680446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:24:16.625 [2024-07-22 11:21:21.680461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 [2024-07-22 11:21:22.563407] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 [2024-07-22 11:21:22.577592] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 Nvme0n1 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 [2024-07-22 11:21:22.725899] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:17.560 [ 01:24:17.560 { 01:24:17.560 "allow_any_host": true, 01:24:17.560 "hosts": [], 01:24:17.560 "listen_addresses": [], 01:24:17.560 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:24:17.560 "subtype": "Discovery" 01:24:17.560 }, 01:24:17.560 { 01:24:17.560 "allow_any_host": true, 01:24:17.560 "hosts": [], 01:24:17.560 "listen_addresses": [ 01:24:17.560 { 01:24:17.560 "adrfam": "IPv4", 01:24:17.560 "traddr": "10.0.0.2", 01:24:17.560 "trsvcid": "4420", 01:24:17.560 "trtype": "TCP" 01:24:17.560 } 01:24:17.560 ], 01:24:17.560 "max_cntlid": 65519, 01:24:17.560 "max_namespaces": 1, 01:24:17.560 "min_cntlid": 1, 01:24:17.560 "model_number": "SPDK bdev Controller", 01:24:17.560 "namespaces": [ 01:24:17.560 { 01:24:17.560 "bdev_name": "Nvme0n1", 01:24:17.560 "name": "Nvme0n1", 01:24:17.560 "nguid": "3C8F30F01CB94539BA8760A82184032E", 01:24:17.560 "nsid": 1, 01:24:17.560 "uuid": "3c8f30f0-1cb9-4539-ba87-60a82184032e" 01:24:17.560 } 01:24:17.560 ], 01:24:17.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:24:17.560 "serial_number": "SPDK00000000000001", 01:24:17.560 "subtype": "NVMe" 01:24:17.560 } 01:24:17.560 ] 01:24:17.560 11:21:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 01:24:17.560 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:24:17.818 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 01:24:17.818 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:24:17.818 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 01:24:17.818 11:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 01:24:18.076 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 01:24:18.076 11:21:23 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 01:24:18.076 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 01:24:18.076 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:24:18.076 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:18.076 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:18.076 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:18.076 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 01:24:18.076 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 01:24:18.076 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 01:24:18.076 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 01:24:18.076 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:24:18.076 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 01:24:18.076 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 01:24:18.076 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:24:18.076 rmmod nvme_tcp 01:24:18.076 rmmod nvme_fabrics 01:24:18.335 rmmod nvme_keyring 01:24:18.335 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:24:18.335 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 01:24:18.335 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 01:24:18.335 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 116342 ']' 01:24:18.335 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 116342 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 116342 ']' 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 116342 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116342 01:24:18.335 killing process with pid 116342 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116342' 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 116342 01:24:18.335 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 116342 01:24:18.593 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 01:24:18.593 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:24:18.593 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:24:18.593 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:24:18.593 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 01:24:18.593 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:18.593 
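Stripped of the xtrace noise, the passthru-identify flow exercised above reduces to the sketch below. The rpc_cmd helper seen in the trace forwards to scripts/rpc.py; the PCIe address (0000:00:10.0), NQNs and serial/model values (12340 / QEMU) are the ones observed on this particular VM, and paths assume the SPDK repo root as the working directory.

    # start the target inside the test namespace, deferring framework init so the
    # passthru-identify handler can be enabled first (the test waits for the RPC socket here)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # identify the drive directly over PCIe and again through the NVMe/TCP subsystem; with
    # passthru enabled both paths report the same serial and model number (12340 / QEMU here)
    ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | grep -E 'Serial Number:|Model Number:'
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep -E 'Serial Number:|Model Number:'

The comparison at the end of the trace ('[' 12340 '!=' 12340 ']' and '[' QEMU '!=' QEMU ']') is the actual pass/fail check before teardown.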
11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:24:18.593 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:18.593 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:24:18.593 01:24:18.593 real 0m3.170s 01:24:18.593 user 0m7.982s 01:24:18.593 sys 0m0.854s 01:24:18.593 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 01:24:18.593 ************************************ 01:24:18.593 END TEST nvmf_identify_passthru 01:24:18.593 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:24:18.593 ************************************ 01:24:18.593 11:21:23 -- common/autotest_common.sh@1142 -- # return 0 01:24:18.593 11:21:23 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:24:18.593 11:21:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:24:18.593 11:21:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:24:18.593 11:21:23 -- common/autotest_common.sh@10 -- # set +x 01:24:18.593 ************************************ 01:24:18.593 START TEST nvmf_dif 01:24:18.593 ************************************ 01:24:18.593 11:21:23 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:24:18.593 * Looking for test storage... 01:24:18.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:24:18.593 11:21:23 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:18.593 11:21:23 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:18.593 11:21:23 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:18.593 11:21:23 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:18.593 11:21:23 
nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:18.593 11:21:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:18.593 11:21:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:18.593 11:21:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:24:18.593 11:21:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@47 -- # : 0 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 01:24:18.593 11:21:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:24:18.593 11:21:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 01:24:18.593 11:21:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:24:18.593 11:21:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:24:18.593 11:21:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:18.593 11:21:23 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:18.594 11:21:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:24:18.594 11:21:23 nvmf_dif -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:24:18.594 Cannot find device "nvmf_tgt_br" 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@155 -- # true 01:24:18.594 11:21:23 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:24:18.852 Cannot find device "nvmf_tgt_br2" 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@156 -- # true 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:24:18.852 Cannot find device "nvmf_tgt_br" 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@158 -- # true 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:24:18.852 Cannot find device "nvmf_tgt_br2" 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@159 -- # true 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:18.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@162 -- # true 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:18.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@163 -- # true 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:18.852 11:21:23 nvmf_dif -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:24:18.852 11:21:23 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:18.852 11:21:24 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:24:19.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:24:19.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 01:24:19.110 01:24:19.110 --- 10.0.0.2 ping statistics --- 01:24:19.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:19.110 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:24:19.110 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:19.110 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:24:19.110 01:24:19.110 --- 10.0.0.3 ping statistics --- 01:24:19.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:19.110 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:19.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:24:19.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:24:19.110 01:24:19.110 --- 10.0.0.1 ping statistics --- 01:24:19.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:19.110 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@433 -- # return 0 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 01:24:19.110 11:21:24 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:24:19.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:24:19.369 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:24:19.369 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:24:19.369 11:21:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:24:19.369 11:21:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:24:19.369 11:21:24 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 01:24:19.369 11:21:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=116684 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 116684 01:24:19.369 11:21:24 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 116684 ']' 01:24:19.369 11:21:24 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:19.369 11:21:24 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:24:19.369 11:21:24 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 01:24:19.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:19.369 11:21:24 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:19.369 11:21:24 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 01:24:19.369 11:21:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:24:19.369 [2024-07-22 11:21:24.564341] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:24:19.369 [2024-07-22 11:21:24.564977] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:19.628 [2024-07-22 11:21:24.708327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:19.628 [2024-07-22 11:21:24.794005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
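The fio_dif tests that follow use the same pattern: the target already running inside nvmf_tgt_ns_spdk gets a TCP transport created with --dif-insert-or-strip, a null bdev with 16-byte metadata and DIF type 1 is exported through an NVMe/TCP subsystem, and fio then drives it through the SPDK bdev ioengine. A condensed sketch of the sequence traced below; rpc_cmd forwards to scripts/rpc.py, the outer subsystems/bdev wrapper around the attach-controller fragment printed by gen_nvmf_target_json is assumed to be the standard SPDK JSON config layout, and bdev.json / job.fio are placeholder file names (the test streams both via /dev/fd):

    # target side: DIF-aware TCP transport plus a null bdev with per-block metadata
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: fio loads the SPDK bdev ioengine and attaches the subsystem as an NVMe bdev
    cat > bdev.json << 'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio
    # the generated job addresses the attached namespace by bdev name (filename=Nvme0n1)
    # with 4 KiB random reads at iodepth 4, matching the fio output further down

The --dif-insert-or-strip transport option is what makes the target generate and verify the protection information on behalf of the host during these runs.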
01:24:19.628 [2024-07-22 11:21:24.794055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:19.628 [2024-07-22 11:21:24.794069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:19.628 [2024-07-22 11:21:24.794080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:19.628 [2024-07-22 11:21:24.794089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:24:19.628 [2024-07-22 11:21:24.794126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 01:24:20.563 11:21:25 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:24:20.563 11:21:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:20.563 11:21:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:24:20.563 11:21:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:24:20.563 [2024-07-22 11:21:25.626285] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:20.563 11:21:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:24:20.563 11:21:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:24:20.563 ************************************ 01:24:20.563 START TEST fio_dif_1_default 01:24:20.563 ************************************ 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:24:20.563 bdev_null0 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:20.563 11:21:25 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:24:20.563 [2024-07-22 11:21:25.674442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:24:20.563 { 01:24:20.563 "params": { 01:24:20.563 "name": "Nvme$subsystem", 01:24:20.563 "trtype": "$TEST_TRANSPORT", 01:24:20.563 "traddr": "$NVMF_FIRST_TARGET_IP", 01:24:20.563 "adrfam": "ipv4", 01:24:20.563 "trsvcid": "$NVMF_PORT", 01:24:20.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:24:20.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:24:20.563 "hdgst": ${hdgst:-false}, 01:24:20.563 "ddgst": ${ddgst:-false} 01:24:20.563 }, 01:24:20.563 "method": "bdev_nvme_attach_controller" 01:24:20.563 } 01:24:20.563 EOF 01:24:20.563 )") 01:24:20.563 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:20.564 11:21:25 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:24:20.564 "params": { 01:24:20.564 "name": "Nvme0", 01:24:20.564 "trtype": "tcp", 01:24:20.564 "traddr": "10.0.0.2", 01:24:20.564 "adrfam": "ipv4", 01:24:20.564 "trsvcid": "4420", 01:24:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:24:20.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:24:20.564 "hdgst": false, 01:24:20.564 "ddgst": false 01:24:20.564 }, 01:24:20.564 "method": "bdev_nvme_attach_controller" 01:24:20.564 }' 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:24:20.564 11:21:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:20.822 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:24:20.822 fio-3.35 01:24:20.822 Starting 1 thread 01:24:33.046 01:24:33.046 filename0: (groupid=0, jobs=1): err= 0: pid=116773: Mon Jul 22 11:21:36 2024 01:24:33.046 read: IOPS=3677, BW=14.4MiB/s (15.1MB/s)(144MiB/10010msec) 01:24:33.046 slat (nsec): min=5853, max=55747, avg=6946.80, stdev=2413.77 01:24:33.046 clat (usec): min=349, max=42572, avg=1067.04, stdev=5032.45 01:24:33.046 lat (usec): min=355, max=42580, avg=1073.99, stdev=5032.52 01:24:33.046 clat percentiles (usec): 01:24:33.046 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 396], 01:24:33.046 | 30.00th=[ 400], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 
424], 01:24:33.046 | 70.00th=[ 445], 80.00th=[ 478], 90.00th=[ 529], 95.00th=[ 578], 01:24:33.046 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 01:24:33.046 | 99.99th=[42730] 01:24:33.046 bw ( KiB/s): min= 8032, max=26272, per=100.00%, avg=14723.20, stdev=4834.65, samples=20 01:24:33.046 iops : min= 2008, max= 6568, avg=3680.80, stdev=1208.66, samples=20 01:24:33.046 lat (usec) : 500=85.39%, 750=13.03%, 1000=0.01% 01:24:33.046 lat (msec) : 2=0.01%, 10=0.01%, 50=1.55% 01:24:33.046 cpu : usr=88.91%, sys=9.41%, ctx=26, majf=0, minf=9 01:24:33.046 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:24:33.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:33.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:33.046 issued rwts: total=36812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:33.046 latency : target=0, window=0, percentile=100.00%, depth=4 01:24:33.046 01:24:33.046 Run status group 0 (all jobs): 01:24:33.046 READ: bw=14.4MiB/s (15.1MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=144MiB (151MB), run=10010-10010msec 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.046 11:21:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 01:24:33.047 real 0m10.967s 01:24:33.047 user 0m9.506s 01:24:33.047 sys 0m1.205s 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 ************************************ 01:24:33.047 END TEST fio_dif_1_default 01:24:33.047 ************************************ 01:24:33.047 11:21:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:24:33.047 11:21:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:24:33.047 11:21:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:24:33.047 11:21:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 ************************************ 01:24:33.047 START TEST fio_dif_1_multi_subsystems 01:24:33.047 ************************************ 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 bdev_null0 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 [2024-07-22 11:21:36.693734] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 bdev_null1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 11:21:36 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 01:24:33.047 11:21:36 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:24:33.047 { 01:24:33.047 "params": { 01:24:33.047 "name": "Nvme$subsystem", 01:24:33.047 "trtype": "$TEST_TRANSPORT", 01:24:33.047 "traddr": "$NVMF_FIRST_TARGET_IP", 01:24:33.047 "adrfam": "ipv4", 01:24:33.047 "trsvcid": "$NVMF_PORT", 01:24:33.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:24:33.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:24:33.047 "hdgst": ${hdgst:-false}, 01:24:33.047 "ddgst": ${ddgst:-false} 01:24:33.047 }, 01:24:33.047 "method": "bdev_nvme_attach_controller" 01:24:33.047 } 01:24:33.047 EOF 01:24:33.047 )") 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:24:33.047 { 01:24:33.047 "params": { 01:24:33.047 "name": "Nvme$subsystem", 01:24:33.047 "trtype": "$TEST_TRANSPORT", 01:24:33.047 "traddr": "$NVMF_FIRST_TARGET_IP", 01:24:33.047 "adrfam": "ipv4", 01:24:33.047 "trsvcid": "$NVMF_PORT", 01:24:33.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:24:33.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:24:33.047 "hdgst": ${hdgst:-false}, 01:24:33.047 "ddgst": ${ddgst:-false} 01:24:33.047 }, 01:24:33.047 "method": "bdev_nvme_attach_controller" 01:24:33.047 } 01:24:33.047 EOF 01:24:33.047 )") 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
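The trace above shows how gen_nvmf_target_json assembles the bdev configuration: one bdev_nvme_attach_controller fragment per subsystem id, collected into a bash array and normalized with jq. A minimal sketch of that pattern follows; the fragment template and the IFS=,/jq merge mirror the trace, while the outer "subsystems"/"bdev" wrapper object, the helper name, and the environment-variable defaults are assumptions not shown in this excerpt.

# Sketch: one attach_controller fragment per subsystem id, joined with ","
# and pretty-printed by jq (mirrors the config+=() / IFS=, / jq steps traced above).
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # The wrapper object below is an assumption; only the joined fragments
    # and the jq pass are visible in the trace.
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

Called as gen_target_json_sketch 0 1, this would emit one attach_controller entry for Nvme0 and one for Nvme1, matching the two-controller JSON printed just below.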
01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:24:33.047 "params": { 01:24:33.047 "name": "Nvme0", 01:24:33.047 "trtype": "tcp", 01:24:33.047 "traddr": "10.0.0.2", 01:24:33.047 "adrfam": "ipv4", 01:24:33.047 "trsvcid": "4420", 01:24:33.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:24:33.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:24:33.047 "hdgst": false, 01:24:33.047 "ddgst": false 01:24:33.047 }, 01:24:33.047 "method": "bdev_nvme_attach_controller" 01:24:33.047 },{ 01:24:33.047 "params": { 01:24:33.047 "name": "Nvme1", 01:24:33.047 "trtype": "tcp", 01:24:33.047 "traddr": "10.0.0.2", 01:24:33.047 "adrfam": "ipv4", 01:24:33.047 "trsvcid": "4420", 01:24:33.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:24:33.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:24:33.047 "hdgst": false, 01:24:33.047 "ddgst": false 01:24:33.047 }, 01:24:33.047 "method": "bdev_nvme_attach_controller" 01:24:33.047 }' 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:33.047 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:24:33.048 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:24:33.048 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 01:24:33.048 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:24:33.048 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:24:33.048 11:21:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:33.048 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:24:33.048 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:24:33.048 fio-3.35 01:24:33.048 Starting 2 threads 01:24:43.019 01:24:43.019 filename0: (groupid=0, jobs=1): err= 0: pid=116928: Mon Jul 22 11:21:47 2024 01:24:43.019 read: IOPS=781, BW=3128KiB/s (3203kB/s)(30.7MiB/10041msec) 01:24:43.019 slat (nsec): min=5863, max=66363, avg=7792.66, stdev=3580.87 01:24:43.019 clat (usec): min=361, max=42425, avg=5091.57, stdev=12810.81 01:24:43.019 lat (usec): min=368, max=42434, avg=5099.36, stdev=12811.04 01:24:43.019 clat percentiles (usec): 01:24:43.019 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 424], 01:24:43.019 | 30.00th=[ 445], 40.00th=[ 478], 50.00th=[ 502], 60.00th=[ 529], 01:24:43.019 | 70.00th=[ 562], 80.00th=[ 635], 90.00th=[40633], 95.00th=[41157], 01:24:43.019 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 01:24:43.019 | 99.99th=[42206] 01:24:43.019 bw ( KiB/s): min= 576, max= 6400, per=50.92%, avg=3139.20, stdev=2215.51, samples=20 01:24:43.019 iops : 
min= 144, max= 1600, avg=784.80, stdev=553.88, samples=20 01:24:43.019 lat (usec) : 500=49.20%, 750=32.68%, 1000=6.25% 01:24:43.019 lat (msec) : 2=0.51%, 4=0.05%, 50=11.31% 01:24:43.019 cpu : usr=95.49%, sys=3.80%, ctx=26, majf=0, minf=0 01:24:43.019 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:24:43.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:43.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:43.019 issued rwts: total=7852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:43.019 latency : target=0, window=0, percentile=100.00%, depth=4 01:24:43.019 filename1: (groupid=0, jobs=1): err= 0: pid=116929: Mon Jul 22 11:21:47 2024 01:24:43.019 read: IOPS=761, BW=3046KiB/s (3119kB/s)(29.8MiB/10011msec) 01:24:43.020 slat (nsec): min=5854, max=57871, avg=7686.80, stdev=3504.21 01:24:43.020 clat (usec): min=370, max=42534, avg=5229.29, stdev=12985.89 01:24:43.020 lat (usec): min=376, max=42543, avg=5236.98, stdev=12986.17 01:24:43.020 clat percentiles (usec): 01:24:43.020 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 424], 01:24:43.020 | 30.00th=[ 441], 40.00th=[ 469], 50.00th=[ 494], 60.00th=[ 519], 01:24:43.020 | 70.00th=[ 562], 80.00th=[ 685], 90.00th=[40633], 95.00th=[41157], 01:24:43.020 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[42730], 01:24:43.020 | 99.99th=[42730] 01:24:43.020 bw ( KiB/s): min= 512, max= 6944, per=49.44%, avg=3048.00, stdev=2289.97, samples=20 01:24:43.020 iops : min= 128, max= 1736, avg=762.00, stdev=572.49, samples=20 01:24:43.020 lat (usec) : 500=52.99%, 750=28.02%, 1000=6.81% 01:24:43.020 lat (msec) : 2=0.49%, 4=0.05%, 50=11.65% 01:24:43.020 cpu : usr=95.49%, sys=3.79%, ctx=8, majf=0, minf=0 01:24:43.020 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:24:43.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:43.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:43.020 issued rwts: total=7624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:43.020 latency : target=0, window=0, percentile=100.00%, depth=4 01:24:43.020 01:24:43.020 Run status group 0 (all jobs): 01:24:43.020 READ: bw=6165KiB/s (6313kB/s), 3046KiB/s-3128KiB/s (3119kB/s-3203kB/s), io=60.5MiB (63.4MB), run=10011-10041msec 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
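The teardown that begins here mirrors the setup: destroy_subsystems walks each id, deletes the NVMe-oF subsystem, then deletes the null bdev behind it. A condensed sketch of that per-id step; rpc.py stands in for the harness's rpc_cmd wrapper, which is an assumption about how that wrapper resolves.

# Sketch of the per-subsystem teardown traced here and just below.
destroy_subsystem_sketch() {
    local sub_id=$1
    # Drop the NVMe-oF subsystem first, so no initiator still references the bdev.
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
    # Then remove the null bdev that backed its namespace.
    scripts/rpc.py bdev_null_delete "bdev_null$sub_id"
}
# For this test the calls would be: destroy_subsystem_sketch 0; destroy_subsystem_sketch 1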
01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:43.020 01:24:43.020 real 0m11.192s 01:24:43.020 user 0m19.942s 01:24:43.020 sys 0m1.058s 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 01:24:43.020 ************************************ 01:24:43.020 END TEST fio_dif_1_multi_subsystems 01:24:43.020 ************************************ 01:24:43.020 11:21:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 11:21:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:24:43.020 11:21:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:24:43.020 11:21:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:24:43.020 11:21:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:24:43.020 11:21:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 ************************************ 01:24:43.020 START TEST fio_dif_rand_params 01:24:43.020 ************************************ 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 bdev_null0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:43.020 [2024-07-22 11:21:47.939555] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
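Each create_subsystem call traced above reduces to four RPCs: create a null bdev with 16 bytes of metadata and the requested DIF type, create an NVMe-oF subsystem, attach the bdev as its namespace, and expose it on a TCP listener at 10.0.0.2:4420. A standalone sketch with the same arguments (again using rpc.py in place of the harness's rpc_cmd wrapper, an assumption):

# Sketch of the create_subsystem steps for the DIF-type-3 pass traced above.
create_subsystem_sketch() {
    local sub_id=$1 dif_type=${2:-3}
    local nqn="nqn.2016-06.io.spdk:cnode$sub_id"
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF enabled.
    scripts/rpc.py bdev_null_create "bdev_null$sub_id" 64 512 \
        --md-size 16 --dif-type "$dif_type"
    scripts/rpc.py nvmf_create_subsystem "$nqn" \
        --serial-number "53313233-$sub_id" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "$nqn" "bdev_null$sub_id"
    scripts/rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
}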
01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:24:43.020 { 01:24:43.020 "params": { 01:24:43.020 "name": "Nvme$subsystem", 01:24:43.020 "trtype": "$TEST_TRANSPORT", 01:24:43.020 "traddr": "$NVMF_FIRST_TARGET_IP", 01:24:43.020 "adrfam": "ipv4", 01:24:43.020 "trsvcid": "$NVMF_PORT", 01:24:43.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:24:43.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:24:43.020 "hdgst": ${hdgst:-false}, 01:24:43.020 "ddgst": ${ddgst:-false} 01:24:43.020 }, 01:24:43.020 "method": "bdev_nvme_attach_controller" 01:24:43.020 } 01:24:43.020 EOF 01:24:43.020 )") 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
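The repeated ldd | grep | awk lines above are the harness checking whether the fio plugin was linked against a sanitizer runtime; if one is found it is placed ahead of the plugin in LD_PRELOAD so the sanitizer loads first. Condensed, the probe amounts to the sketch below (plugin path taken from the trace; in this run both greps come back empty, so only the plugin ends up in LD_PRELOAD).

# Sketch of the sanitizer-preload probe traced above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Third ldd column is the resolved library path, if the plugin links it.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# Matches the traced result: LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
export LD_PRELOAD="$asan_lib $plugin"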
01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:24:43.020 "params": { 01:24:43.020 "name": "Nvme0", 01:24:43.020 "trtype": "tcp", 01:24:43.020 "traddr": "10.0.0.2", 01:24:43.020 "adrfam": "ipv4", 01:24:43.020 "trsvcid": "4420", 01:24:43.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:24:43.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:24:43.020 "hdgst": false, 01:24:43.020 "ddgst": false 01:24:43.020 }, 01:24:43.020 "method": "bdev_nvme_attach_controller" 01:24:43.020 }' 01:24:43.020 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:24:43.021 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:24:43.021 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:24:43.021 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:43.021 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:24:43.021 11:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:24:43.021 11:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:24:43.021 11:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:24:43.021 11:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:24:43.021 11:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:43.021 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:24:43.021 ... 
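In the invocation above, /dev/fd/62 carries the generated bdev JSON (the --spdk_json_conf argument) and /dev/fd/61 carries the generated fio job file, so neither ever touches disk. A hedged standalone equivalent for this 128k / iodepth=3 pass is sketched below; gen_target_json_sketch is the helper sketched earlier, and the job-file contents plus the Nvme0n1 filename are assumptions inferred from the traced parameters (the harness's gen_fio_conf output is not shown in this excerpt).

# Sketch: same shape as the traced fio call, with the config fed by fd.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
job=$(mktemp)
cat > "$job" <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=<(gen_target_json_sketch 0) "$job"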
01:24:43.021 fio-3.35 01:24:43.021 Starting 3 threads 01:24:49.584 01:24:49.584 filename0: (groupid=0, jobs=1): err= 0: pid=117086: Mon Jul 22 11:21:53 2024 01:24:49.584 read: IOPS=251, BW=31.4MiB/s (33.0MB/s)(158MiB/5008msec) 01:24:49.584 slat (nsec): min=5887, max=61226, avg=15936.65, stdev=7353.27 01:24:49.584 clat (usec): min=4103, max=55369, avg=11901.09, stdev=11348.67 01:24:49.585 lat (usec): min=4109, max=55417, avg=11917.03, stdev=11348.35 01:24:49.585 clat percentiles (usec): 01:24:49.585 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 7046], 01:24:49.585 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 01:24:49.585 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[11863], 95.00th=[48497], 01:24:49.585 | 99.00th=[51643], 99.50th=[51643], 99.90th=[52167], 99.95th=[55313], 01:24:49.585 | 99.99th=[55313] 01:24:49.585 bw ( KiB/s): min=19968, max=43520, per=30.85%, avg=32174.20, stdev=8368.64, samples=10 01:24:49.585 iops : min= 156, max= 340, avg=251.30, stdev=65.44, samples=10 01:24:49.585 lat (msec) : 10=82.06%, 20=9.60%, 50=5.56%, 100=2.78% 01:24:49.585 cpu : usr=94.67%, sys=4.01%, ctx=11, majf=0, minf=0 01:24:49.585 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:24:49.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:49.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:49.585 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:49.585 latency : target=0, window=0, percentile=100.00%, depth=3 01:24:49.585 filename0: (groupid=0, jobs=1): err= 0: pid=117087: Mon Jul 22 11:21:53 2024 01:24:49.585 read: IOPS=316, BW=39.5MiB/s (41.4MB/s)(198MiB/5006msec) 01:24:49.585 slat (nsec): min=5942, max=55525, avg=13482.17, stdev=6223.57 01:24:49.585 clat (usec): min=3386, max=48165, avg=9472.53, stdev=4161.32 01:24:49.585 lat (usec): min=3396, max=48181, avg=9486.01, stdev=4163.04 01:24:49.585 clat percentiles (usec): 01:24:49.585 | 1.00th=[ 3687], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 6587], 01:24:49.585 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[11207], 01:24:49.585 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13173], 95.00th=[14222], 01:24:49.585 | 99.00th=[16909], 99.50th=[18482], 99.90th=[47973], 99.95th=[47973], 01:24:49.585 | 99.99th=[47973] 01:24:49.585 bw ( KiB/s): min=32256, max=52992, per=38.80%, avg=40456.60, stdev=6997.76, samples=10 01:24:49.585 iops : min= 252, max= 414, avg=316.00, stdev=54.64, samples=10 01:24:49.585 lat (msec) : 4=13.46%, 10=40.90%, 20=45.26%, 50=0.38% 01:24:49.585 cpu : usr=92.53%, sys=5.35%, ctx=7, majf=0, minf=0 01:24:49.585 IO depths : 1=7.4%, 2=92.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:24:49.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:49.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:49.585 issued rwts: total=1582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:49.585 latency : target=0, window=0, percentile=100.00%, depth=3 01:24:49.585 filename0: (groupid=0, jobs=1): err= 0: pid=117088: Mon Jul 22 11:21:53 2024 01:24:49.585 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(155MiB/5007msec) 01:24:49.585 slat (nsec): min=4306, max=51492, avg=11993.94, stdev=6335.57 01:24:49.585 clat (usec): min=3650, max=55421, avg=12107.04, stdev=10385.19 01:24:49.585 lat (usec): min=3657, max=55433, avg=12119.04, stdev=10385.30 01:24:49.585 clat percentiles (usec): 01:24:49.585 | 1.00th=[ 3720], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 
6783], 01:24:49.585 | 30.00th=[ 7701], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10814], 01:24:49.585 | 70.00th=[11076], 80.00th=[11469], 90.00th=[13173], 95.00th=[47449], 01:24:49.585 | 99.00th=[52167], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 01:24:49.585 | 99.99th=[55313] 01:24:49.585 bw ( KiB/s): min=23296, max=41298, per=30.37%, avg=31668.00, stdev=5394.95, samples=10 01:24:49.585 iops : min= 182, max= 322, avg=247.30, stdev=41.98, samples=10 01:24:49.585 lat (msec) : 4=2.26%, 10=38.93%, 20=52.02%, 50=3.15%, 100=3.63% 01:24:49.585 cpu : usr=92.63%, sys=5.43%, ctx=4, majf=0, minf=0 01:24:49.585 IO depths : 1=9.2%, 2=90.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:24:49.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:49.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:49.585 issued rwts: total=1238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:49.585 latency : target=0, window=0, percentile=100.00%, depth=3 01:24:49.585 01:24:49.585 Run status group 0 (all jobs): 01:24:49.585 READ: bw=102MiB/s (107MB/s), 30.9MiB/s-39.5MiB/s (32.4MB/s-41.4MB/s), io=510MiB (535MB), run=5006-5008msec 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 bdev_null0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 [2024-07-22 11:21:53.939024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 bdev_null1 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 bdev_null2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:24:49.585 11:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:24:49.586 { 01:24:49.586 "params": { 01:24:49.586 "name": "Nvme$subsystem", 01:24:49.586 "trtype": "$TEST_TRANSPORT", 01:24:49.586 "traddr": "$NVMF_FIRST_TARGET_IP", 01:24:49.586 "adrfam": "ipv4", 
01:24:49.586 "trsvcid": "$NVMF_PORT", 01:24:49.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:24:49.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:24:49.586 "hdgst": ${hdgst:-false}, 01:24:49.586 "ddgst": ${ddgst:-false} 01:24:49.586 }, 01:24:49.586 "method": "bdev_nvme_attach_controller" 01:24:49.586 } 01:24:49.586 EOF 01:24:49.586 )") 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:24:49.586 { 01:24:49.586 "params": { 01:24:49.586 "name": "Nvme$subsystem", 01:24:49.586 "trtype": "$TEST_TRANSPORT", 01:24:49.586 "traddr": "$NVMF_FIRST_TARGET_IP", 01:24:49.586 "adrfam": "ipv4", 01:24:49.586 "trsvcid": "$NVMF_PORT", 01:24:49.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:24:49.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:24:49.586 "hdgst": ${hdgst:-false}, 01:24:49.586 "ddgst": ${ddgst:-false} 01:24:49.586 }, 01:24:49.586 "method": "bdev_nvme_attach_controller" 01:24:49.586 } 01:24:49.586 EOF 01:24:49.586 )") 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 01:24:49.586 { 01:24:49.586 "params": { 01:24:49.586 "name": "Nvme$subsystem", 01:24:49.586 "trtype": "$TEST_TRANSPORT", 01:24:49.586 "traddr": "$NVMF_FIRST_TARGET_IP", 01:24:49.586 "adrfam": "ipv4", 01:24:49.586 "trsvcid": "$NVMF_PORT", 01:24:49.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:24:49.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:24:49.586 "hdgst": ${hdgst:-false}, 01:24:49.586 "ddgst": ${ddgst:-false} 01:24:49.586 }, 01:24:49.586 "method": "bdev_nvme_attach_controller" 01:24:49.586 } 01:24:49.586 EOF 01:24:49.586 )") 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:24:49.586 "params": { 01:24:49.586 "name": "Nvme0", 01:24:49.586 "trtype": "tcp", 01:24:49.586 "traddr": "10.0.0.2", 01:24:49.586 "adrfam": "ipv4", 01:24:49.586 "trsvcid": "4420", 01:24:49.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:24:49.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:24:49.586 "hdgst": false, 01:24:49.586 "ddgst": false 01:24:49.586 }, 01:24:49.586 "method": "bdev_nvme_attach_controller" 01:24:49.586 },{ 01:24:49.586 "params": { 01:24:49.586 "name": "Nvme1", 01:24:49.586 "trtype": "tcp", 01:24:49.586 "traddr": "10.0.0.2", 01:24:49.586 "adrfam": "ipv4", 01:24:49.586 "trsvcid": "4420", 01:24:49.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:24:49.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:24:49.586 "hdgst": false, 01:24:49.586 "ddgst": false 01:24:49.586 }, 01:24:49.586 "method": "bdev_nvme_attach_controller" 01:24:49.586 },{ 01:24:49.586 "params": { 01:24:49.586 "name": "Nvme2", 01:24:49.586 "trtype": "tcp", 01:24:49.586 "traddr": "10.0.0.2", 01:24:49.586 "adrfam": "ipv4", 01:24:49.586 "trsvcid": "4420", 01:24:49.586 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:24:49.586 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:24:49.586 "hdgst": false, 01:24:49.586 "ddgst": false 01:24:49.586 }, 01:24:49.586 "method": "bdev_nvme_attach_controller" 01:24:49.586 }' 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 
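For this pass the trace set NULL_DIF=2, bs=4k, numjobs=8, iodepth=16 and files=2, i.e. three subsystems, and the JSON above attaches Nvme0, Nvme1 and Nvme2. Three job sections at numjobs=8 give the 24 fio threads started below. A hypothetical job file consistent with those parameters is sketched here; the section names match the filename0/1/2 job lines below, while the NvmeXn1 bdev names are assumptions.

# Sketch: job-file shape implied by the traced parameters for the 3-subsystem run.
cat > rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF
# 3 job sections x numjobs=8 = the "Starting 24 threads" seen below.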
01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:24:49.586 11:21:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:24:49.586 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:24:49.586 ... 01:24:49.586 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:24:49.586 ... 01:24:49.586 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:24:49.586 ... 01:24:49.586 fio-3.35 01:24:49.586 Starting 24 threads 01:25:01.799 01:25:01.800 filename0: (groupid=0, jobs=1): err= 0: pid=117179: Mon Jul 22 11:22:05 2024 01:25:01.800 read: IOPS=223, BW=895KiB/s (916kB/s)(8988KiB/10044msec) 01:25:01.800 slat (nsec): min=3334, max=55395, avg=12693.44, stdev=8009.70 01:25:01.800 clat (msec): min=31, max=192, avg=71.31, stdev=22.54 01:25:01.800 lat (msec): min=31, max=192, avg=71.33, stdev=22.54 01:25:01.800 clat percentiles (msec): 01:25:01.800 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 52], 01:25:01.800 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 73], 01:25:01.800 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 110], 01:25:01.800 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 194], 99.95th=[ 194], 01:25:01.800 | 99.99th=[ 194] 01:25:01.800 bw ( KiB/s): min= 640, max= 1399, per=4.04%, avg=892.00, stdev=170.31, samples=20 01:25:01.800 iops : min= 160, max= 349, avg=222.95, stdev=42.46, samples=20 01:25:01.800 lat (msec) : 50=19.40%, 100=69.16%, 250=11.44% 01:25:01.800 cpu : usr=35.49%, sys=0.66%, ctx=980, majf=0, minf=9 01:25:01.800 IO depths : 1=1.0%, 2=2.4%, 4=10.2%, 8=73.7%, 16=12.6%, 32=0.0%, >=64=0.0% 01:25:01.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.800 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.800 filename0: (groupid=0, jobs=1): err= 0: pid=117180: Mon Jul 22 11:22:05 2024 01:25:01.800 read: IOPS=247, BW=991KiB/s (1015kB/s)(9956KiB/10047msec) 01:25:01.800 slat (usec): min=4, max=11016, avg=26.09, stdev=330.02 01:25:01.800 clat (msec): min=22, max=153, avg=64.35, stdev=19.75 01:25:01.800 lat (msec): min=22, max=153, avg=64.38, stdev=19.76 01:25:01.800 clat percentiles (msec): 01:25:01.800 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 47], 01:25:01.800 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 69], 01:25:01.800 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 103], 01:25:01.800 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 155], 01:25:01.800 | 99.99th=[ 155] 01:25:01.800 bw ( KiB/s): min= 640, max= 1280, per=4.47%, avg=988.85, stdev=173.32, samples=20 01:25:01.800 iops : min= 160, max= 320, avg=247.20, stdev=43.32, samples=20 01:25:01.800 lat (msec) : 50=27.92%, 100=65.77%, 250=6.31% 01:25:01.800 cpu : usr=44.65%, sys=0.89%, ctx=1217, majf=0, minf=9 01:25:01.800 IO depths : 1=2.2%, 2=4.6%, 4=12.5%, 8=69.5%, 16=11.2%, 32=0.0%, >=64=0.0% 01:25:01.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 
01:25:01.800 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.800 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.800 filename0: (groupid=0, jobs=1): err= 0: pid=117181: Mon Jul 22 11:22:05 2024 01:25:01.800 read: IOPS=227, BW=910KiB/s (932kB/s)(9116KiB/10020msec) 01:25:01.800 slat (usec): min=3, max=8036, avg=18.86, stdev=237.79 01:25:01.800 clat (msec): min=25, max=150, avg=70.23, stdev=24.85 01:25:01.800 lat (msec): min=25, max=150, avg=70.24, stdev=24.85 01:25:01.800 clat percentiles (msec): 01:25:01.800 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 01:25:01.800 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 70], 60.00th=[ 72], 01:25:01.800 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 117], 01:25:01.800 | 99.00th=[ 134], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 150], 01:25:01.800 | 99.99th=[ 150] 01:25:01.800 bw ( KiB/s): min= 640, max= 1256, per=4.06%, avg=898.11, stdev=185.57, samples=19 01:25:01.800 iops : min= 160, max= 314, avg=224.53, stdev=46.39, samples=19 01:25:01.800 lat (msec) : 50=25.93%, 100=60.33%, 250=13.73% 01:25:01.800 cpu : usr=34.05%, sys=0.57%, ctx=910, majf=0, minf=9 01:25:01.800 IO depths : 1=0.7%, 2=1.5%, 4=7.9%, 8=76.6%, 16=13.3%, 32=0.0%, >=64=0.0% 01:25:01.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 complete : 0=0.0%, 4=89.2%, 8=6.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.800 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.800 filename0: (groupid=0, jobs=1): err= 0: pid=117182: Mon Jul 22 11:22:05 2024 01:25:01.800 read: IOPS=233, BW=934KiB/s (956kB/s)(9356KiB/10019msec) 01:25:01.800 slat (usec): min=6, max=8018, avg=22.80, stdev=259.84 01:25:01.800 clat (msec): min=27, max=156, avg=68.31, stdev=21.73 01:25:01.800 lat (msec): min=27, max=156, avg=68.34, stdev=21.73 01:25:01.800 clat percentiles (msec): 01:25:01.800 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 49], 01:25:01.800 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 01:25:01.800 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 104], 95.00th=[ 109], 01:25:01.800 | 99.00th=[ 126], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 157], 01:25:01.800 | 99.99th=[ 157] 01:25:01.800 bw ( KiB/s): min= 512, max= 1200, per=4.22%, avg=933.60, stdev=186.68, samples=20 01:25:01.800 iops : min= 128, max= 300, avg=233.40, stdev=46.67, samples=20 01:25:01.800 lat (msec) : 50=21.98%, 100=66.74%, 250=11.29% 01:25:01.800 cpu : usr=42.00%, sys=0.72%, ctx=1312, majf=0, minf=9 01:25:01.800 IO depths : 1=2.4%, 2=5.0%, 4=13.2%, 8=68.3%, 16=11.2%, 32=0.0%, >=64=0.0% 01:25:01.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 complete : 0=0.0%, 4=91.0%, 8=4.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 issued rwts: total=2339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.800 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.800 filename0: (groupid=0, jobs=1): err= 0: pid=117183: Mon Jul 22 11:22:05 2024 01:25:01.800 read: IOPS=203, BW=813KiB/s (832kB/s)(8140KiB/10013msec) 01:25:01.800 slat (usec): min=3, max=8070, avg=20.84, stdev=251.89 01:25:01.800 clat (msec): min=18, max=161, avg=78.54, stdev=20.93 01:25:01.800 lat (msec): min=18, max=161, avg=78.57, stdev=20.92 01:25:01.800 clat percentiles (msec): 01:25:01.800 | 1.00th=[ 37], 5.00th=[ 49], 10.00th=[ 57], 20.00th=[ 62], 01:25:01.800 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 81], 
01:25:01.800 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 121], 01:25:01.800 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 163], 99.95th=[ 163], 01:25:01.800 | 99.99th=[ 163] 01:25:01.800 bw ( KiB/s): min= 592, max= 976, per=3.66%, avg=809.47, stdev=102.84, samples=19 01:25:01.800 iops : min= 148, max= 244, avg=202.37, stdev=25.71, samples=19 01:25:01.800 lat (msec) : 20=0.10%, 50=6.68%, 100=77.44%, 250=15.77% 01:25:01.800 cpu : usr=38.38%, sys=0.81%, ctx=1179, majf=0, minf=9 01:25:01.800 IO depths : 1=2.9%, 2=6.2%, 4=16.9%, 8=63.7%, 16=10.3%, 32=0.0%, >=64=0.0% 01:25:01.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.800 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.800 filename0: (groupid=0, jobs=1): err= 0: pid=117184: Mon Jul 22 11:22:05 2024 01:25:01.800 read: IOPS=267, BW=1070KiB/s (1095kB/s)(10.5MiB/10055msec) 01:25:01.800 slat (usec): min=4, max=8021, avg=20.10, stdev=206.10 01:25:01.800 clat (msec): min=2, max=115, avg=59.60, stdev=19.36 01:25:01.800 lat (msec): min=2, max=115, avg=59.62, stdev=19.35 01:25:01.800 clat percentiles (msec): 01:25:01.800 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 45], 01:25:01.800 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 62], 01:25:01.800 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 93], 01:25:01.800 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 116], 99.95th=[ 116], 01:25:01.800 | 99.99th=[ 116] 01:25:01.800 bw ( KiB/s): min= 840, max= 1704, per=4.85%, avg=1071.10, stdev=196.85, samples=20 01:25:01.800 iops : min= 210, max= 426, avg=267.75, stdev=49.23, samples=20 01:25:01.800 lat (msec) : 4=0.78%, 10=1.41%, 20=0.19%, 50=31.57%, 100=64.11% 01:25:01.800 lat (msec) : 250=1.93% 01:25:01.800 cpu : usr=34.80%, sys=0.61%, ctx=1062, majf=0, minf=9 01:25:01.800 IO depths : 1=0.3%, 2=0.6%, 4=5.8%, 8=79.8%, 16=13.5%, 32=0.0%, >=64=0.0% 01:25:01.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 complete : 0=0.0%, 4=89.0%, 8=6.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 issued rwts: total=2689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.800 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.800 filename0: (groupid=0, jobs=1): err= 0: pid=117185: Mon Jul 22 11:22:05 2024 01:25:01.800 read: IOPS=222, BW=889KiB/s (910kB/s)(8904KiB/10015msec) 01:25:01.800 slat (usec): min=4, max=8033, avg=22.96, stdev=282.41 01:25:01.800 clat (msec): min=16, max=140, avg=71.81, stdev=18.72 01:25:01.800 lat (msec): min=16, max=140, avg=71.83, stdev=18.72 01:25:01.800 clat percentiles (msec): 01:25:01.800 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 01:25:01.800 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 01:25:01.800 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 107], 01:25:01.800 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 140], 99.95th=[ 140], 01:25:01.800 | 99.99th=[ 140] 01:25:01.800 bw ( KiB/s): min= 768, max= 1096, per=4.01%, avg=887.47, stdev=84.34, samples=19 01:25:01.800 iops : min= 192, max= 274, avg=221.84, stdev=21.08, samples=19 01:25:01.800 lat (msec) : 20=0.27%, 50=11.99%, 100=79.78%, 250=7.95% 01:25:01.800 cpu : usr=33.21%, sys=0.64%, ctx=890, majf=0, minf=9 01:25:01.800 IO depths : 1=1.8%, 2=4.6%, 4=14.2%, 8=68.1%, 16=11.2%, 32=0.0%, >=64=0.0% 01:25:01.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.800 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.800 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.800 filename0: (groupid=0, jobs=1): err= 0: pid=117186: Mon Jul 22 11:22:05 2024 01:25:01.800 read: IOPS=238, BW=953KiB/s (976kB/s)(9552KiB/10025msec) 01:25:01.800 slat (usec): min=6, max=5033, avg=21.02, stdev=184.16 01:25:01.800 clat (msec): min=28, max=138, avg=67.03, stdev=21.12 01:25:01.800 lat (msec): min=28, max=138, avg=67.05, stdev=21.12 01:25:01.800 clat percentiles (msec): 01:25:01.800 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 01:25:01.800 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 01:25:01.800 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 01:25:01.800 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 01:25:01.800 | 99.99th=[ 140] 01:25:01.800 bw ( KiB/s): min= 600, max= 1232, per=4.29%, avg=948.80, stdev=186.24, samples=20 01:25:01.800 iops : min= 150, max= 308, avg=237.20, stdev=46.56, samples=20 01:25:01.800 lat (msec) : 50=22.91%, 100=68.30%, 250=8.79% 01:25:01.800 cpu : usr=42.47%, sys=0.82%, ctx=1480, majf=0, minf=9 01:25:01.800 IO depths : 1=1.3%, 2=2.8%, 4=10.1%, 8=73.7%, 16=12.1%, 32=0.0%, >=64=0.0% 01:25:01.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.801 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.801 filename1: (groupid=0, jobs=1): err= 0: pid=117187: Mon Jul 22 11:22:05 2024 01:25:01.801 read: IOPS=259, BW=1037KiB/s (1062kB/s)(10.2MiB/10029msec) 01:25:01.801 slat (usec): min=5, max=8057, avg=23.00, stdev=288.99 01:25:01.801 clat (msec): min=25, max=130, avg=61.49, stdev=18.38 01:25:01.801 lat (msec): min=25, max=130, avg=61.51, stdev=18.39 01:25:01.801 clat percentiles (msec): 01:25:01.801 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 01:25:01.801 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 64], 01:25:01.801 | 70.00th=[ 70], 80.00th=[ 78], 90.00th=[ 88], 95.00th=[ 95], 01:25:01.801 | 99.00th=[ 114], 99.50th=[ 118], 99.90th=[ 131], 99.95th=[ 131], 01:25:01.801 | 99.99th=[ 131] 01:25:01.801 bw ( KiB/s): min= 720, max= 1272, per=4.68%, avg=1034.00, stdev=153.97, samples=20 01:25:01.801 iops : min= 180, max= 318, avg=258.50, stdev=38.49, samples=20 01:25:01.801 lat (msec) : 50=31.91%, 100=64.63%, 250=3.46% 01:25:01.801 cpu : usr=38.64%, sys=0.85%, ctx=1339, majf=0, minf=9 01:25:01.801 IO depths : 1=0.9%, 2=2.2%, 4=9.2%, 8=75.2%, 16=12.5%, 32=0.0%, >=64=0.0% 01:25:01.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.801 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.801 filename1: (groupid=0, jobs=1): err= 0: pid=117188: Mon Jul 22 11:22:05 2024 01:25:01.801 read: IOPS=217, BW=872KiB/s (893kB/s)(8728KiB/10013msec) 01:25:01.801 slat (usec): min=3, max=8029, avg=26.54, stdev=312.42 01:25:01.801 clat (msec): min=29, max=132, avg=73.22, stdev=20.98 01:25:01.801 lat (msec): min=29, max=132, avg=73.25, stdev=20.98 01:25:01.801 clat percentiles (msec): 01:25:01.801 | 1.00th=[ 36], 
5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 01:25:01.801 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 77], 01:25:01.801 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 109], 01:25:01.801 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 133], 99.95th=[ 133], 01:25:01.801 | 99.99th=[ 133] 01:25:01.801 bw ( KiB/s): min= 640, max= 1072, per=3.89%, avg=860.42, stdev=130.81, samples=19 01:25:01.801 iops : min= 160, max= 268, avg=215.11, stdev=32.70, samples=19 01:25:01.801 lat (msec) : 50=14.94%, 100=70.85%, 250=14.21% 01:25:01.801 cpu : usr=42.67%, sys=0.72%, ctx=956, majf=0, minf=9 01:25:01.801 IO depths : 1=2.2%, 2=4.9%, 4=14.0%, 8=67.9%, 16=11.0%, 32=0.0%, >=64=0.0% 01:25:01.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.801 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.801 filename1: (groupid=0, jobs=1): err= 0: pid=117189: Mon Jul 22 11:22:05 2024 01:25:01.801 read: IOPS=235, BW=941KiB/s (964kB/s)(9468KiB/10060msec) 01:25:01.801 slat (usec): min=4, max=8033, avg=23.30, stdev=287.28 01:25:01.801 clat (msec): min=5, max=153, avg=67.86, stdev=25.40 01:25:01.801 lat (msec): min=5, max=153, avg=67.88, stdev=25.40 01:25:01.801 clat percentiles (msec): 01:25:01.801 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 47], 01:25:01.801 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 67], 60.00th=[ 71], 01:25:01.801 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 121], 01:25:01.801 | 99.00th=[ 134], 99.50th=[ 134], 99.90th=[ 155], 99.95th=[ 155], 01:25:01.801 | 99.99th=[ 155] 01:25:01.801 bw ( KiB/s): min= 640, max= 1664, per=4.25%, avg=939.85, stdev=233.66, samples=20 01:25:01.801 iops : min= 160, max= 416, avg=234.95, stdev=58.40, samples=20 01:25:01.801 lat (msec) : 10=1.35%, 20=0.68%, 50=24.00%, 100=61.68%, 250=12.29% 01:25:01.801 cpu : usr=37.28%, sys=0.78%, ctx=1229, majf=0, minf=9 01:25:01.801 IO depths : 1=1.9%, 2=4.1%, 4=13.3%, 8=69.4%, 16=11.4%, 32=0.0%, >=64=0.0% 01:25:01.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 issued rwts: total=2367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.801 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.801 filename1: (groupid=0, jobs=1): err= 0: pid=117190: Mon Jul 22 11:22:05 2024 01:25:01.801 read: IOPS=205, BW=820KiB/s (840kB/s)(8212KiB/10011msec) 01:25:01.801 slat (usec): min=4, max=8047, avg=27.23, stdev=343.41 01:25:01.801 clat (msec): min=16, max=172, avg=77.82, stdev=22.46 01:25:01.801 lat (msec): min=16, max=172, avg=77.84, stdev=22.45 01:25:01.801 clat percentiles (msec): 01:25:01.801 | 1.00th=[ 31], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 61], 01:25:01.801 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 82], 01:25:01.801 | 70.00th=[ 87], 80.00th=[ 94], 90.00th=[ 109], 95.00th=[ 118], 01:25:01.801 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 174], 99.95th=[ 174], 01:25:01.801 | 99.99th=[ 174] 01:25:01.801 bw ( KiB/s): min= 616, max= 1074, per=3.70%, avg=817.42, stdev=109.32, samples=19 01:25:01.801 iops : min= 154, max= 268, avg=204.32, stdev=27.26, samples=19 01:25:01.801 lat (msec) : 20=0.78%, 50=8.96%, 100=74.82%, 250=15.44% 01:25:01.801 cpu : usr=33.25%, sys=0.60%, ctx=883, majf=0, minf=9 01:25:01.801 IO depths : 1=1.7%, 2=3.8%, 4=11.5%, 8=71.2%, 
16=11.8%, 32=0.0%, >=64=0.0% 01:25:01.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.801 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.801 filename1: (groupid=0, jobs=1): err= 0: pid=117191: Mon Jul 22 11:22:05 2024 01:25:01.801 read: IOPS=204, BW=818KiB/s (837kB/s)(8192KiB/10018msec) 01:25:01.801 slat (usec): min=4, max=8051, avg=17.24, stdev=177.82 01:25:01.801 clat (msec): min=33, max=195, avg=78.14, stdev=22.25 01:25:01.801 lat (msec): min=33, max=195, avg=78.16, stdev=22.25 01:25:01.801 clat percentiles (msec): 01:25:01.801 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 61], 01:25:01.801 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 82], 01:25:01.801 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 121], 01:25:01.801 | 99.00th=[ 146], 99.50th=[ 174], 99.90th=[ 197], 99.95th=[ 197], 01:25:01.801 | 99.99th=[ 197] 01:25:01.801 bw ( KiB/s): min= 512, max= 1072, per=3.69%, avg=816.05, stdev=122.41, samples=19 01:25:01.801 iops : min= 128, max= 268, avg=204.00, stdev=30.60, samples=19 01:25:01.801 lat (msec) : 50=6.30%, 100=79.05%, 250=14.65% 01:25:01.801 cpu : usr=33.82%, sys=0.72%, ctx=927, majf=0, minf=9 01:25:01.801 IO depths : 1=2.5%, 2=5.9%, 4=16.1%, 8=65.1%, 16=10.3%, 32=0.0%, >=64=0.0% 01:25:01.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.801 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.801 filename1: (groupid=0, jobs=1): err= 0: pid=117192: Mon Jul 22 11:22:05 2024 01:25:01.801 read: IOPS=261, BW=1047KiB/s (1072kB/s)(10.3MiB/10054msec) 01:25:01.801 slat (usec): min=4, max=8031, avg=20.32, stdev=236.28 01:25:01.801 clat (msec): min=3, max=137, avg=60.94, stdev=20.22 01:25:01.801 lat (msec): min=3, max=137, avg=60.96, stdev=20.22 01:25:01.801 clat percentiles (msec): 01:25:01.801 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 01:25:01.801 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 64], 01:25:01.801 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 95], 01:25:01.801 | 99.00th=[ 117], 99.50th=[ 127], 99.90th=[ 138], 99.95th=[ 138], 01:25:01.801 | 99.99th=[ 138] 01:25:01.801 bw ( KiB/s): min= 792, max= 1729, per=4.74%, avg=1047.50, stdev=214.25, samples=20 01:25:01.801 iops : min= 198, max= 432, avg=261.85, stdev=53.51, samples=20 01:25:01.801 lat (msec) : 4=0.27%, 10=2.09%, 20=0.08%, 50=30.97%, 100=62.69% 01:25:01.801 lat (msec) : 250=3.91% 01:25:01.801 cpu : usr=36.34%, sys=0.68%, ctx=1056, majf=0, minf=9 01:25:01.801 IO depths : 1=0.7%, 2=1.6%, 4=7.9%, 8=76.7%, 16=13.1%, 32=0.0%, >=64=0.0% 01:25:01.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 issued rwts: total=2632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.801 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.801 filename1: (groupid=0, jobs=1): err= 0: pid=117193: Mon Jul 22 11:22:05 2024 01:25:01.801 read: IOPS=235, BW=942KiB/s (965kB/s)(9464KiB/10042msec) 01:25:01.801 slat (usec): min=4, max=7059, avg=15.31, stdev=145.10 01:25:01.801 clat (msec): min=17, max=169, avg=67.77, stdev=24.18 
01:25:01.801 lat (msec): min=17, max=169, avg=67.78, stdev=24.19 01:25:01.801 clat percentiles (msec): 01:25:01.801 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 48], 01:25:01.801 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 71], 01:25:01.801 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 111], 01:25:01.801 | 99.00th=[ 133], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 01:25:01.801 | 99.99th=[ 169] 01:25:01.801 bw ( KiB/s): min= 560, max= 1320, per=4.25%, avg=940.00, stdev=207.62, samples=20 01:25:01.801 iops : min= 140, max= 330, avg=235.00, stdev=51.90, samples=20 01:25:01.801 lat (msec) : 20=0.68%, 50=28.83%, 100=59.72%, 250=10.78% 01:25:01.801 cpu : usr=34.14%, sys=0.61%, ctx=930, majf=0, minf=9 01:25:01.801 IO depths : 1=1.0%, 2=2.4%, 4=10.9%, 8=73.5%, 16=12.2%, 32=0.0%, >=64=0.0% 01:25:01.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.801 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.801 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.801 filename1: (groupid=0, jobs=1): err= 0: pid=117194: Mon Jul 22 11:22:05 2024 01:25:01.801 read: IOPS=201, BW=806KiB/s (825kB/s)(8064KiB/10005msec) 01:25:01.801 slat (usec): min=4, max=4026, avg=16.85, stdev=126.26 01:25:01.801 clat (msec): min=35, max=166, avg=79.27, stdev=21.54 01:25:01.801 lat (msec): min=35, max=166, avg=79.28, stdev=21.54 01:25:01.801 clat percentiles (msec): 01:25:01.801 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 63], 01:25:01.801 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 82], 01:25:01.801 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 113], 01:25:01.801 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 167], 01:25:01.801 | 99.99th=[ 167] 01:25:01.801 bw ( KiB/s): min= 640, max= 1026, per=3.62%, avg=801.79, stdev=95.98, samples=19 01:25:01.801 iops : min= 160, max= 256, avg=200.42, stdev=23.93, samples=19 01:25:01.801 lat (msec) : 50=7.59%, 100=76.93%, 250=15.48% 01:25:01.801 cpu : usr=44.26%, sys=0.79%, ctx=1186, majf=0, minf=9 01:25:01.802 IO depths : 1=4.3%, 2=9.0%, 4=20.5%, 8=57.9%, 16=8.3%, 32=0.0%, >=64=0.0% 01:25:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.802 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.802 filename2: (groupid=0, jobs=1): err= 0: pid=117195: Mon Jul 22 11:22:05 2024 01:25:01.802 read: IOPS=252, BW=1010KiB/s (1035kB/s)(9.89MiB/10019msec) 01:25:01.802 slat (usec): min=6, max=7079, avg=27.57, stdev=277.85 01:25:01.802 clat (msec): min=25, max=122, avg=63.13, stdev=18.33 01:25:01.802 lat (msec): min=25, max=122, avg=63.16, stdev=18.33 01:25:01.802 clat percentiles (msec): 01:25:01.802 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 47], 01:25:01.802 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 65], 01:25:01.802 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 91], 95.00th=[ 100], 01:25:01.802 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 01:25:01.802 | 99.99th=[ 124] 01:25:01.802 bw ( KiB/s): min= 736, max= 1224, per=4.56%, avg=1008.40, stdev=126.88, samples=20 01:25:01.802 iops : min= 184, max= 306, avg=252.10, stdev=31.72, samples=20 01:25:01.802 lat (msec) : 50=27.46%, 100=68.12%, 250=4.43% 01:25:01.802 
cpu : usr=42.45%, sys=0.85%, ctx=1163, majf=0, minf=9 01:25:01.802 IO depths : 1=0.9%, 2=2.0%, 4=8.2%, 8=76.4%, 16=12.4%, 32=0.0%, >=64=0.0% 01:25:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 issued rwts: total=2531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.802 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.802 filename2: (groupid=0, jobs=1): err= 0: pid=117196: Mon Jul 22 11:22:05 2024 01:25:01.802 read: IOPS=224, BW=899KiB/s (921kB/s)(9008KiB/10019msec) 01:25:01.802 slat (usec): min=3, max=7967, avg=18.67, stdev=202.12 01:25:01.802 clat (msec): min=34, max=149, avg=71.06, stdev=20.74 01:25:01.802 lat (msec): min=34, max=149, avg=71.08, stdev=20.74 01:25:01.802 clat percentiles (msec): 01:25:01.802 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 54], 01:25:01.802 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 73], 01:25:01.802 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 100], 95.00th=[ 108], 01:25:01.802 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 150], 99.95th=[ 150], 01:25:01.802 | 99.99th=[ 150] 01:25:01.802 bw ( KiB/s): min= 640, max= 1232, per=4.04%, avg=894.32, stdev=166.95, samples=19 01:25:01.802 iops : min= 160, max= 308, avg=223.58, stdev=41.74, samples=19 01:25:01.802 lat (msec) : 50=16.61%, 100=74.11%, 250=9.28% 01:25:01.802 cpu : usr=38.36%, sys=0.80%, ctx=1281, majf=0, minf=9 01:25:01.802 IO depths : 1=2.1%, 2=4.8%, 4=13.6%, 8=68.5%, 16=11.0%, 32=0.0%, >=64=0.0% 01:25:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.802 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.802 filename2: (groupid=0, jobs=1): err= 0: pid=117197: Mon Jul 22 11:22:05 2024 01:25:01.802 read: IOPS=241, BW=965KiB/s (989kB/s)(9680KiB/10027msec) 01:25:01.802 slat (usec): min=6, max=8047, avg=37.61, stdev=416.81 01:25:01.802 clat (msec): min=26, max=141, avg=66.10, stdev=19.03 01:25:01.802 lat (msec): min=26, max=141, avg=66.14, stdev=19.03 01:25:01.802 clat percentiles (msec): 01:25:01.802 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 01:25:01.802 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 70], 01:25:01.802 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 101], 01:25:01.802 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 142], 01:25:01.802 | 99.99th=[ 142] 01:25:01.802 bw ( KiB/s): min= 736, max= 1328, per=4.35%, avg=961.65, stdev=153.52, samples=20 01:25:01.802 iops : min= 184, max= 332, avg=240.40, stdev=38.39, samples=20 01:25:01.802 lat (msec) : 50=23.43%, 100=71.82%, 250=4.75% 01:25:01.802 cpu : usr=38.24%, sys=0.73%, ctx=1172, majf=0, minf=9 01:25:01.802 IO depths : 1=0.9%, 2=1.8%, 4=8.5%, 8=76.3%, 16=12.6%, 32=0.0%, >=64=0.0% 01:25:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 issued rwts: total=2420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.802 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.802 filename2: (groupid=0, jobs=1): err= 0: pid=117198: Mon Jul 22 11:22:05 2024 01:25:01.802 read: IOPS=236, BW=948KiB/s (970kB/s)(9504KiB/10028msec) 01:25:01.802 slat (nsec): min=6646, max=63927, avg=12739.75, stdev=7970.43 
01:25:01.802 clat (msec): min=32, max=133, avg=67.40, stdev=18.54 01:25:01.802 lat (msec): min=32, max=133, avg=67.41, stdev=18.54 01:25:01.802 clat percentiles (msec): 01:25:01.802 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 50], 01:25:01.802 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 71], 01:25:01.802 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 101], 01:25:01.802 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 134], 99.95th=[ 134], 01:25:01.802 | 99.99th=[ 134] 01:25:01.802 bw ( KiB/s): min= 752, max= 1152, per=4.28%, avg=946.40, stdev=103.17, samples=20 01:25:01.802 iops : min= 188, max= 288, avg=236.60, stdev=25.79, samples=20 01:25:01.802 lat (msec) : 50=21.04%, 100=74.37%, 250=4.59% 01:25:01.802 cpu : usr=35.34%, sys=0.61%, ctx=925, majf=0, minf=9 01:25:01.802 IO depths : 1=0.8%, 2=2.0%, 4=9.3%, 8=74.9%, 16=12.9%, 32=0.0%, >=64=0.0% 01:25:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.802 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.802 filename2: (groupid=0, jobs=1): err= 0: pid=117199: Mon Jul 22 11:22:05 2024 01:25:01.802 read: IOPS=265, BW=1061KiB/s (1087kB/s)(10.4MiB/10035msec) 01:25:01.802 slat (usec): min=3, max=4028, avg=13.26, stdev=78.24 01:25:01.802 clat (msec): min=3, max=128, avg=60.10, stdev=19.36 01:25:01.802 lat (msec): min=3, max=128, avg=60.12, stdev=19.36 01:25:01.802 clat percentiles (msec): 01:25:01.802 | 1.00th=[ 6], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 46], 01:25:01.802 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 63], 01:25:01.802 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 95], 01:25:01.802 | 99.00th=[ 105], 99.50th=[ 114], 99.90th=[ 129], 99.95th=[ 129], 01:25:01.802 | 99.99th=[ 129] 01:25:01.802 bw ( KiB/s): min= 768, max= 1526, per=4.78%, avg=1057.75, stdev=200.39, samples=20 01:25:01.802 iops : min= 192, max= 381, avg=264.40, stdev=50.03, samples=20 01:25:01.802 lat (msec) : 4=0.60%, 10=1.69%, 20=0.11%, 50=31.25%, 100=64.09% 01:25:01.802 lat (msec) : 250=2.25% 01:25:01.802 cpu : usr=35.96%, sys=0.66%, ctx=984, majf=0, minf=9 01:25:01.802 IO depths : 1=0.6%, 2=1.3%, 4=7.7%, 8=77.1%, 16=13.3%, 32=0.0%, >=64=0.0% 01:25:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 issued rwts: total=2662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.802 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.802 filename2: (groupid=0, jobs=1): err= 0: pid=117200: Mon Jul 22 11:22:05 2024 01:25:01.802 read: IOPS=201, BW=806KiB/s (825kB/s)(8064KiB/10005msec) 01:25:01.802 slat (usec): min=3, max=8037, avg=23.95, stdev=297.18 01:25:01.802 clat (msec): min=21, max=198, avg=79.18, stdev=21.99 01:25:01.802 lat (msec): min=22, max=198, avg=79.20, stdev=22.00 01:25:01.802 clat percentiles (msec): 01:25:01.802 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 63], 01:25:01.802 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 83], 01:25:01.802 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 115], 01:25:01.802 | 99.00th=[ 142], 99.50th=[ 180], 99.90th=[ 199], 99.95th=[ 199], 01:25:01.802 | 99.99th=[ 199] 01:25:01.802 bw ( KiB/s): min= 640, max= 1150, per=3.65%, avg=807.89, stdev=138.83, samples=19 01:25:01.802 iops : min= 160, max= 287, 
avg=201.95, stdev=34.64, samples=19 01:25:01.802 lat (msec) : 50=7.09%, 100=78.57%, 250=14.34% 01:25:01.802 cpu : usr=35.34%, sys=0.67%, ctx=943, majf=0, minf=9 01:25:01.802 IO depths : 1=2.5%, 2=5.7%, 4=16.2%, 8=65.1%, 16=10.5%, 32=0.0%, >=64=0.0% 01:25:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.802 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.802 filename2: (groupid=0, jobs=1): err= 0: pid=117201: Mon Jul 22 11:22:05 2024 01:25:01.802 read: IOPS=224, BW=897KiB/s (919kB/s)(9008KiB/10039msec) 01:25:01.802 slat (usec): min=5, max=4707, avg=17.58, stdev=151.58 01:25:01.802 clat (msec): min=32, max=146, avg=71.21, stdev=20.91 01:25:01.802 lat (msec): min=32, max=146, avg=71.23, stdev=20.92 01:25:01.802 clat percentiles (msec): 01:25:01.802 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 53], 01:25:01.802 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 75], 01:25:01.802 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 99], 95.00th=[ 108], 01:25:01.802 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 01:25:01.802 | 99.99th=[ 148] 01:25:01.802 bw ( KiB/s): min= 640, max= 1248, per=4.04%, avg=894.40, stdev=143.71, samples=20 01:25:01.802 iops : min= 160, max= 312, avg=223.60, stdev=35.93, samples=20 01:25:01.802 lat (msec) : 50=17.45%, 100=73.22%, 250=9.33% 01:25:01.802 cpu : usr=37.86%, sys=0.74%, ctx=1234, majf=0, minf=9 01:25:01.802 IO depths : 1=1.8%, 2=4.4%, 4=14.6%, 8=67.8%, 16=11.5%, 32=0.0%, >=64=0.0% 01:25:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.802 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.802 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.802 filename2: (groupid=0, jobs=1): err= 0: pid=117202: Mon Jul 22 11:22:05 2024 01:25:01.802 read: IOPS=212, BW=850KiB/s (870kB/s)(8504KiB/10005msec) 01:25:01.802 slat (usec): min=3, max=8017, avg=18.20, stdev=194.75 01:25:01.802 clat (msec): min=5, max=144, avg=75.17, stdev=20.73 01:25:01.802 lat (msec): min=5, max=144, avg=75.19, stdev=20.74 01:25:01.802 clat percentiles (msec): 01:25:01.802 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 61], 01:25:01.802 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 01:25:01.802 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 112], 01:25:01.802 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 01:25:01.802 | 99.99th=[ 144] 01:25:01.802 bw ( KiB/s): min= 640, max= 1152, per=3.80%, avg=841.26, stdev=123.77, samples=19 01:25:01.802 iops : min= 160, max= 288, avg=210.32, stdev=30.94, samples=19 01:25:01.803 lat (msec) : 10=0.75%, 50=8.23%, 100=77.94%, 250=13.08% 01:25:01.803 cpu : usr=42.32%, sys=0.85%, ctx=1401, majf=0, minf=9 01:25:01.803 IO depths : 1=2.6%, 2=5.6%, 4=14.7%, 8=66.1%, 16=11.0%, 32=0.0%, >=64=0.0% 01:25:01.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.803 complete : 0=0.0%, 4=91.6%, 8=3.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:01.803 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:01.803 latency : target=0, window=0, percentile=100.00%, depth=16 01:25:01.803 01:25:01.803 Run status group 0 (all jobs): 01:25:01.803 READ: bw=21.6MiB/s (22.6MB/s), 806KiB/s-1070KiB/s 
(825kB/s-1095kB/s), io=217MiB (228MB), run=10005-10060msec 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 bdev_null0 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 [2024-07-22 11:22:05.388394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 bdev_null1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:25:01.803 { 01:25:01.803 "params": { 01:25:01.803 "name": "Nvme$subsystem", 01:25:01.803 "trtype": "$TEST_TRANSPORT", 01:25:01.803 "traddr": "$NVMF_FIRST_TARGET_IP", 01:25:01.803 "adrfam": "ipv4", 01:25:01.803 "trsvcid": "$NVMF_PORT", 01:25:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:25:01.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:25:01.803 "hdgst": ${hdgst:-false}, 01:25:01.803 "ddgst": ${ddgst:-false} 01:25:01.803 }, 01:25:01.803 "method": "bdev_nvme_attach_controller" 01:25:01.803 } 01:25:01.803 EOF 01:25:01.803 )") 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:25:01.803 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:25:01.804 { 01:25:01.804 "params": { 01:25:01.804 "name": "Nvme$subsystem", 01:25:01.804 "trtype": "$TEST_TRANSPORT", 01:25:01.804 "traddr": "$NVMF_FIRST_TARGET_IP", 01:25:01.804 "adrfam": "ipv4", 01:25:01.804 "trsvcid": "$NVMF_PORT", 01:25:01.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:25:01.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:25:01.804 "hdgst": ${hdgst:-false}, 01:25:01.804 "ddgst": ${ddgst:-false} 01:25:01.804 }, 01:25:01.804 "method": "bdev_nvme_attach_controller" 01:25:01.804 } 01:25:01.804 EOF 01:25:01.804 )") 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:25:01.804 "params": { 01:25:01.804 "name": "Nvme0", 01:25:01.804 "trtype": "tcp", 01:25:01.804 "traddr": "10.0.0.2", 01:25:01.804 "adrfam": "ipv4", 01:25:01.804 "trsvcid": "4420", 01:25:01.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:25:01.804 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:25:01.804 "hdgst": false, 01:25:01.804 "ddgst": false 01:25:01.804 }, 01:25:01.804 "method": "bdev_nvme_attach_controller" 01:25:01.804 },{ 01:25:01.804 "params": { 01:25:01.804 "name": "Nvme1", 01:25:01.804 "trtype": "tcp", 01:25:01.804 "traddr": "10.0.0.2", 01:25:01.804 "adrfam": "ipv4", 01:25:01.804 "trsvcid": "4420", 01:25:01.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:25:01.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:25:01.804 "hdgst": false, 01:25:01.804 "ddgst": false 01:25:01.804 }, 01:25:01.804 "method": "bdev_nvme_attach_controller" 01:25:01.804 }' 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:25:01.804 11:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:25:01.804 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:25:01.804 ... 01:25:01.804 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:25:01.804 ... 
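The xtrace above shows target/dif.sh handing fio two generated inputs over process substitution: the NVMe-oF attach configuration on /dev/fd/62 (the JSON printed by gen_nvmf_target_json) and the job description on /dev/fd/61 (built by gen_fio_conf), with the SPDK bdev plugin preloaded. A standalone re-creation of the run that starts below might look like the following sketch; it is an approximation based only on the parameters visible in the trace (bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5, randread), not the exact job file gen_fio_conf emits. The bdev.json file stands in for the generated JSON, and the Nvme0n1/Nvme1n1 filenames assume the usual "n1" namespace suffix for the attached controllers Nvme0 and Nvme1.

# Hypothetical standalone equivalent of the traced fio invocation (a sketch).
cat > dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
# assumed bdev name for nqn.2016-06.io.spdk:cnode0 / bdev_null0
filename=Nvme0n1

[filename1]
# assumed bdev name for nqn.2016-06.io.spdk:cnode1 / bdev_null1
filename=Nvme1n1
EOF

# bdev.json is assumed to contain the two bdev_nvme_attach_controller entries
# printed by gen_nvmf_target_json above (Nvme0 and Nvme1 over TCP to 10.0.0.2:4420).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif_rand.fio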
01:25:01.804 fio-3.35 01:25:01.804 Starting 4 threads 01:25:07.079 01:25:07.079 filename0: (groupid=0, jobs=1): err= 0: pid=117329: Mon Jul 22 11:22:11 2024 01:25:07.079 read: IOPS=1890, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5003msec) 01:25:07.079 slat (nsec): min=6190, max=98185, avg=13047.17, stdev=8427.90 01:25:07.079 clat (usec): min=2410, max=9293, avg=4184.39, stdev=494.49 01:25:07.079 lat (usec): min=2417, max=9309, avg=4197.44, stdev=494.73 01:25:07.079 clat percentiles (usec): 01:25:07.079 | 1.00th=[ 3294], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687], 01:25:07.079 | 30.00th=[ 3851], 40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4359], 01:25:07.079 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5014], 01:25:07.079 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 7046], 99.95th=[ 9241], 01:25:07.079 | 99.99th=[ 9241] 01:25:07.079 bw ( KiB/s): min=14208, max=16976, per=25.04%, avg=15146.67, stdev=942.58, samples=9 01:25:07.080 iops : min= 1776, max= 2122, avg=1893.33, stdev=117.82, samples=9 01:25:07.080 lat (msec) : 4=36.32%, 10=63.68% 01:25:07.080 cpu : usr=95.20%, sys=3.60%, ctx=13, majf=0, minf=0 01:25:07.080 IO depths : 1=4.8%, 2=10.2%, 4=64.8%, 8=20.2%, 16=0.0%, 32=0.0%, >=64=0.0% 01:25:07.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:07.080 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:07.080 issued rwts: total=9456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:07.080 latency : target=0, window=0, percentile=100.00%, depth=8 01:25:07.080 filename0: (groupid=0, jobs=1): err= 0: pid=117330: Mon Jul 22 11:22:11 2024 01:25:07.080 read: IOPS=1887, BW=14.7MiB/s (15.5MB/s)(73.8MiB/5001msec) 01:25:07.080 slat (usec): min=6, max=102, avg=16.65, stdev= 9.44 01:25:07.080 clat (usec): min=1812, max=10792, avg=4152.10, stdev=523.17 01:25:07.080 lat (usec): min=1840, max=10812, avg=4168.75, stdev=523.17 01:25:07.080 clat percentiles (usec): 01:25:07.080 | 1.00th=[ 3392], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3687], 01:25:07.080 | 30.00th=[ 3851], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4293], 01:25:07.080 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4817], 01:25:07.080 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 9634], 99.95th=[10683], 01:25:07.080 | 99.99th=[10814] 01:25:07.080 bw ( KiB/s): min=14208, max=16768, per=25.00%, avg=15118.22, stdev=897.02, samples=9 01:25:07.080 iops : min= 1776, max= 2096, avg=1889.78, stdev=112.13, samples=9 01:25:07.080 lat (msec) : 2=0.08%, 4=35.74%, 10=64.08%, 20=0.10% 01:25:07.080 cpu : usr=95.50%, sys=3.04%, ctx=34, majf=0, minf=10 01:25:07.080 IO depths : 1=11.2%, 2=25.0%, 4=50.0%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 01:25:07.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:07.080 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:07.080 issued rwts: total=9440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:07.080 latency : target=0, window=0, percentile=100.00%, depth=8 01:25:07.080 filename1: (groupid=0, jobs=1): err= 0: pid=117331: Mon Jul 22 11:22:11 2024 01:25:07.080 read: IOPS=1892, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5001msec) 01:25:07.080 slat (nsec): min=5850, max=74253, avg=9069.24, stdev=5389.25 01:25:07.080 clat (usec): min=2481, max=9407, avg=4179.81, stdev=441.69 01:25:07.080 lat (usec): min=2496, max=9414, avg=4188.88, stdev=442.26 01:25:07.080 clat percentiles (usec): 01:25:07.080 | 1.00th=[ 3490], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3720], 01:25:07.080 | 30.00th=[ 3884], 
40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4359], 01:25:07.080 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 01:25:07.080 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 5735], 99.95th=[ 9241], 01:25:07.080 | 99.99th=[ 9372] 01:25:07.080 bw ( KiB/s): min=14336, max=17024, per=25.07%, avg=15164.00, stdev=931.24, samples=9 01:25:07.080 iops : min= 1792, max= 2128, avg=1895.44, stdev=116.46, samples=9 01:25:07.080 lat (msec) : 4=34.72%, 10=65.28% 01:25:07.080 cpu : usr=95.40%, sys=3.46%, ctx=11, majf=0, minf=9 01:25:07.080 IO depths : 1=10.4%, 2=25.0%, 4=50.0%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:25:07.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:07.080 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:07.080 issued rwts: total=9464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:07.080 latency : target=0, window=0, percentile=100.00%, depth=8 01:25:07.080 filename1: (groupid=0, jobs=1): err= 0: pid=117332: Mon Jul 22 11:22:11 2024 01:25:07.080 read: IOPS=1892, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5002msec) 01:25:07.080 slat (usec): min=5, max=100, avg=15.73, stdev= 9.10 01:25:07.080 clat (usec): min=1852, max=9296, avg=4148.29, stdev=531.71 01:25:07.080 lat (usec): min=1859, max=9320, avg=4164.02, stdev=531.76 01:25:07.080 clat percentiles (usec): 01:25:07.080 | 1.00th=[ 3097], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3687], 01:25:07.080 | 30.00th=[ 3851], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4293], 01:25:07.080 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4883], 01:25:07.080 | 99.00th=[ 5800], 99.50th=[ 6587], 99.90th=[ 8848], 99.95th=[ 9241], 01:25:07.080 | 99.99th=[ 9241] 01:25:07.080 bw ( KiB/s): min=14336, max=17024, per=25.07%, avg=15160.89, stdev=943.02, samples=9 01:25:07.080 iops : min= 1792, max= 2128, avg=1895.11, stdev=117.88, samples=9 01:25:07.080 lat (msec) : 2=0.10%, 4=36.58%, 10=63.32% 01:25:07.080 cpu : usr=96.10%, sys=2.70%, ctx=7, majf=0, minf=9 01:25:07.080 IO depths : 1=9.5%, 2=25.0%, 4=50.0%, 8=15.5%, 16=0.0%, 32=0.0%, >=64=0.0% 01:25:07.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:07.080 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:07.080 issued rwts: total=9464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:07.080 latency : target=0, window=0, percentile=100.00%, depth=8 01:25:07.080 01:25:07.080 Run status group 0 (all jobs): 01:25:07.080 READ: bw=59.1MiB/s (61.9MB/s), 14.7MiB/s-14.8MiB/s (15.5MB/s-15.5MB/s), io=296MiB (310MB), run=5001-5003msec 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 ************************************ 01:25:07.080 END TEST fio_dif_rand_params 01:25:07.080 ************************************ 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:07.080 01:25:07.080 real 0m23.592s 01:25:07.080 user 2m6.873s 01:25:07.080 sys 0m3.985s 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 11:22:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:25:07.080 11:22:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:25:07.080 11:22:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:25:07.080 11:22:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 ************************************ 01:25:07.080 START TEST fio_dif_digest 01:25:07.080 ************************************ 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 
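For reference, the create_subsystems 0 call whose trace follows issues the same JSON-RPCs that could be run by hand with SPDK's scripts/rpc.py. The method names and arguments below are copied from the trace; the use of ./scripts/rpc.py against a target on the default RPC socket, with the TCP transport already created, is an assumption of this sketch. The notable difference from the earlier fio_dif_rand_params setup is --dif-type 3 on the null bdev, which is what the hdgst/ddgst-enabled digest run exercises.

# Hypothetical manual equivalent of create_subsystem 0 for the digest test:
# a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3,
# exported over NVMe/TCP on 10.0.0.2:4420.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420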
01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 bdev_null0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:25:07.080 [2024-07-22 11:22:11.593516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 01:25:07.080 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:25:07.081 { 01:25:07.081 "params": { 01:25:07.081 "name": "Nvme$subsystem", 01:25:07.081 "trtype": "$TEST_TRANSPORT", 01:25:07.081 "traddr": "$NVMF_FIRST_TARGET_IP", 01:25:07.081 "adrfam": "ipv4", 01:25:07.081 "trsvcid": "$NVMF_PORT", 01:25:07.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:25:07.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:25:07.081 "hdgst": ${hdgst:-false}, 01:25:07.081 
"ddgst": ${ddgst:-false} 01:25:07.081 }, 01:25:07.081 "method": "bdev_nvme_attach_controller" 01:25:07.081 } 01:25:07.081 EOF 01:25:07.081 )") 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:25:07.081 "params": { 01:25:07.081 "name": "Nvme0", 01:25:07.081 "trtype": "tcp", 01:25:07.081 "traddr": "10.0.0.2", 01:25:07.081 "adrfam": "ipv4", 01:25:07.081 "trsvcid": "4420", 01:25:07.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:25:07.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:25:07.081 "hdgst": true, 01:25:07.081 "ddgst": true 01:25:07.081 }, 01:25:07.081 "method": "bdev_nvme_attach_controller" 01:25:07.081 }' 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:25:07.081 11:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:25:07.081 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:25:07.081 ... 01:25:07.081 fio-3.35 01:25:07.081 Starting 3 threads 01:25:19.294 01:25:19.294 filename0: (groupid=0, jobs=1): err= 0: pid=117439: Mon Jul 22 11:22:22 2024 01:25:19.294 read: IOPS=235, BW=29.5MiB/s (30.9MB/s)(295MiB/10002msec) 01:25:19.294 slat (nsec): min=6196, max=66964, avg=15762.97, stdev=7182.76 01:25:19.294 clat (usec): min=8344, max=93734, avg=12703.70, stdev=8784.82 01:25:19.294 lat (usec): min=8363, max=93745, avg=12719.46, stdev=8784.71 01:25:19.294 clat percentiles (usec): 01:25:19.294 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 01:25:19.294 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 01:25:19.294 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[13042], 01:25:19.294 | 99.00th=[51643], 99.50th=[52167], 99.90th=[54264], 99.95th=[93848], 01:25:19.294 | 99.99th=[93848] 01:25:19.294 bw ( KiB/s): min=24064, max=36096, per=33.56%, avg=30517.89, stdev=4101.84, samples=19 01:25:19.294 iops : min= 188, max= 282, avg=238.42, stdev=32.05, samples=19 01:25:19.294 lat (msec) : 10=14.72%, 20=80.66%, 50=0.08%, 100=4.54% 01:25:19.294 cpu : usr=93.48%, sys=4.96%, ctx=13, majf=0, minf=0 01:25:19.294 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:25:19.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:19.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:19.294 issued rwts: total=2358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:19.294 latency : target=0, window=0, percentile=100.00%, depth=3 01:25:19.294 filename0: (groupid=0, jobs=1): err= 0: pid=117440: Mon Jul 22 11:22:22 2024 01:25:19.294 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(278MiB/10002msec) 01:25:19.294 slat (nsec): min=9500, max=73645, avg=18705.03, stdev=7246.67 01:25:19.294 clat (usec): min=7056, max=18042, avg=13494.52, stdev=2447.83 01:25:19.294 lat (usec): min=7078, max=18060, avg=13513.23, stdev=2448.53 01:25:19.294 clat percentiles (usec): 01:25:19.294 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[10421], 01:25:19.294 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 01:25:19.294 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 01:25:19.294 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 01:25:19.294 | 99.99th=[17957] 01:25:19.294 bw ( KiB/s): min=25344, max=32512, per=31.10%, avg=28278.42, stdev=2226.46, samples=19 01:25:19.294 iops : min= 198, max= 254, avg=220.89, stdev=17.42, samples=19 01:25:19.294 lat (msec) : 10=18.74%, 20=81.26% 01:25:19.294 cpu : usr=94.67%, sys=3.85%, ctx=19, majf=0, minf=0 01:25:19.294 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:25:19.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:19.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:19.294 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:19.294 latency : target=0, window=0, percentile=100.00%, depth=3 01:25:19.294 filename0: (groupid=0, jobs=1): err= 0: pid=117441: Mon Jul 22 11:22:22 2024 01:25:19.294 read: IOPS=252, BW=31.6MiB/s 
(33.1MB/s)(316MiB/10003msec) 01:25:19.294 slat (nsec): min=6371, max=87777, avg=16560.32, stdev=7225.54 01:25:19.294 clat (usec): min=6287, max=16322, avg=11848.19, stdev=2229.56 01:25:19.294 lat (usec): min=6298, max=16329, avg=11864.75, stdev=2229.13 01:25:19.294 clat percentiles (usec): 01:25:19.294 | 1.00th=[ 7046], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 9372], 01:25:19.294 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 01:25:19.294 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14091], 95.00th=[14484], 01:25:19.294 | 99.00th=[15270], 99.50th=[15533], 99.90th=[15926], 99.95th=[15926], 01:25:19.294 | 99.99th=[16319] 01:25:19.294 bw ( KiB/s): min=28672, max=36864, per=35.43%, avg=32218.74, stdev=2421.80, samples=19 01:25:19.294 iops : min= 224, max= 288, avg=251.68, stdev=18.94, samples=19 01:25:19.294 lat (msec) : 10=21.52%, 20=78.48% 01:25:19.294 cpu : usr=92.47%, sys=5.39%, ctx=14, majf=0, minf=0 01:25:19.294 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:25:19.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:19.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:25:19.294 issued rwts: total=2528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:25:19.294 latency : target=0, window=0, percentile=100.00%, depth=3 01:25:19.294 01:25:19.294 Run status group 0 (all jobs): 01:25:19.294 READ: bw=88.8MiB/s (93.1MB/s), 27.7MiB/s-31.6MiB/s (29.1MB/s-33.1MB/s), io=888MiB (931MB), run=10002-10003msec 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:25:19.294 ************************************ 01:25:19.294 END TEST fio_dif_digest 01:25:19.294 ************************************ 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:19.294 01:25:19.294 real 0m10.933s 01:25:19.294 user 0m28.628s 01:25:19.294 sys 0m1.697s 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 01:25:19.294 11:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 01:25:19.294 11:22:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:25:19.294 11:22:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@117 -- # 
sync 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@120 -- # set +e 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:25:19.294 rmmod nvme_tcp 01:25:19.294 rmmod nvme_fabrics 01:25:19.294 rmmod nvme_keyring 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@124 -- # set -e 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@125 -- # return 0 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 116684 ']' 01:25:19.294 11:22:22 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 116684 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 116684 ']' 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 116684 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@953 -- # uname 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116684 01:25:19.294 killing process with pid 116684 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:25:19.294 11:22:22 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116684' 01:25:19.295 11:22:22 nvmf_dif -- common/autotest_common.sh@967 -- # kill 116684 01:25:19.295 11:22:22 nvmf_dif -- common/autotest_common.sh@972 -- # wait 116684 01:25:19.295 11:22:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 01:25:19.295 11:22:22 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:25:19.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:25:19.295 Waiting for block devices as requested 01:25:19.295 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:25:19.295 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:25:19.295 11:22:23 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:25:19.295 11:22:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:25:19.295 11:22:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:25:19.295 11:22:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 01:25:19.295 11:22:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:25:19.295 11:22:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:25:19.295 11:22:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:25:19.295 11:22:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:25:19.295 ************************************ 01:25:19.295 END TEST nvmf_dif 01:25:19.295 ************************************ 01:25:19.295 01:25:19.295 real 0m59.807s 01:25:19.295 user 3m50.335s 01:25:19.295 sys 0m14.490s 01:25:19.295 11:22:23 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 01:25:19.295 11:22:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:25:19.295 11:22:23 -- common/autotest_common.sh@1142 -- # return 0 01:25:19.295 11:22:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:25:19.295 11:22:23 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:25:19.295 11:22:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:25:19.295 11:22:23 -- common/autotest_common.sh@10 -- # set +x 01:25:19.295 ************************************ 01:25:19.295 START TEST nvmf_abort_qd_sizes 01:25:19.295 ************************************ 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:25:19.295 * Looking for test storage... 01:25:19.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 01:25:19.295 11:22:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 01:25:19.295 Cannot find device "nvmf_tgt_br" 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 01:25:19.295 Cannot find device "nvmf_tgt_br2" 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 01:25:19.295 Cannot find device "nvmf_tgt_br" 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 01:25:19.295 Cannot find device "nvmf_tgt_br2" 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:25:19.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:25:19.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:25:19.295 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:25:19.296 11:22:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 01:25:19.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:25:19.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 01:25:19.296 01:25:19.296 --- 10.0.0.2 ping statistics --- 01:25:19.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:19.296 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 01:25:19.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:25:19.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 01:25:19.296 01:25:19.296 --- 10.0.0.3 ping statistics --- 01:25:19.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:19.296 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:25:19.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:25:19.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:25:19.296 01:25:19.296 --- 10.0.0.1 ping statistics --- 01:25:19.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:19.296 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 01:25:19.296 11:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:25:19.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:25:19.554 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:25:19.813 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=118024 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 118024 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 118024 ']' 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 01:25:19.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 01:25:19.813 11:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:25:19.813 [2024-07-22 11:22:24.940859] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:25:19.813 [2024-07-22 11:22:24.941503] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:20.071 [2024-07-22 11:22:25.085083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:25:20.071 [2024-07-22 11:22:25.223489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:25:20.071 [2024-07-22 11:22:25.223566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:25:20.071 [2024-07-22 11:22:25.223582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:25:20.071 [2024-07-22 11:22:25.223593] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 01:25:20.071 [2024-07-22 11:22:25.223603] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:25:20.071 [2024-07-22 11:22:25.223781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:25:20.071 [2024-07-22 11:22:25.225067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 01:25:20.071 [2024-07-22 11:22:25.225159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 01:25:20.071 [2024-07-22 11:22:25.225171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:25:21.006 11:22:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:25:21.006 11:22:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 01:25:21.006 11:22:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:25:21.006 11:22:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 01:25:21.006 11:22:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 01:25:21.006 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 01:25:21.007 11:22:26 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 01:25:21.007 11:22:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:25:21.007 ************************************ 01:25:21.007 START TEST spdk_target_abort 01:25:21.007 ************************************ 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:21.007 spdk_targetn1 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:21.007 [2024-07-22 11:22:26.156816] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:21.007 [2024-07-22 11:22:26.185095] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:21.007 11:22:26 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:25:21.007 11:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:25:24.289 Initializing NVMe Controllers 01:25:24.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:25:24.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:25:24.289 Initialization complete. Launching workers. 
01:25:24.289 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10170, failed: 0 01:25:24.289 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1108, failed to submit 9062 01:25:24.289 success 745, unsuccess 363, failed 0 01:25:24.289 11:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:25:24.289 11:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:25:28.475 Initializing NVMe Controllers 01:25:28.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:25:28.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:25:28.475 Initialization complete. Launching workers. 01:25:28.475 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6007, failed: 0 01:25:28.475 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 4780 01:25:28.475 success 268, unsuccess 959, failed 0 01:25:28.475 11:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:25:28.475 11:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:25:31.006 Initializing NVMe Controllers 01:25:31.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:25:31.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:25:31.006 Initialization complete. Launching workers. 
01:25:31.006 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29977, failed: 0 01:25:31.006 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2577, failed to submit 27400 01:25:31.006 success 339, unsuccess 2238, failed 0 01:25:31.006 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:25:31.006 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:31.006 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:31.006 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:31.006 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:25:31.006 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:31.006 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 118024 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 118024 ']' 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 118024 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118024 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:25:31.263 killing process with pid 118024 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118024' 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 118024 01:25:31.263 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 118024 01:25:31.521 01:25:31.521 real 0m10.546s 01:25:31.521 user 0m43.177s 01:25:31.521 sys 0m1.725s 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:31.521 ************************************ 01:25:31.521 END TEST spdk_target_abort 01:25:31.521 ************************************ 01:25:31.521 11:22:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 01:25:31.521 11:22:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:25:31.521 11:22:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:25:31.521 11:22:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 01:25:31.521 11:22:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:25:31.521 
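Condensed, the spdk_target_abort run that just finished amounts to: export the local PCIe NVMe device (0000:00:10.0) through an SPDK NVMe-oF/TCP subsystem, then drive the abort example against it at queue depths 4, 24 and 64. A minimal sketch of that sequence, using the same RPCs the harness issues through its rpc_cmd wrapper; the direct scripts/rpc.py invocation and the loop are illustrative shorthand, all names, flags and addresses are taken from the log above:

  # attach the PCIe controller as bdev "spdk_target"; its namespace shows up as spdk_targetn1
  ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  # TCP transport (same flags the harness passes), then subsystem, namespace and listener
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # 50/50 read-write abort workload, 4 KiB I/O, at each queue depth exercised above
  for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done
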
************************************ 01:25:31.521 START TEST kernel_target_abort 01:25:31.521 ************************************ 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:25:31.521 11:22:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:25:32.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:25:32.086 Waiting for block devices as requested 01:25:32.086 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:25:32.086 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:25:32.086 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:25:32.345 No valid GPT data, bailing 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:25:32.345 No valid GPT data, bailing 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:25:32.345 No valid GPT data, bailing 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:25:32.345 No valid GPT data, bailing 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 01:25:32.345 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 --hostid=8977fc08-3b30-49e8-886e-3a1f0545f479 -a 10.0.0.1 -t tcp -s 4420 01:25:32.604 01:25:32.604 Discovery Log Number of Records 2, Generation counter 2 01:25:32.604 =====Discovery Log Entry 0====== 01:25:32.604 trtype: tcp 01:25:32.604 adrfam: ipv4 01:25:32.604 subtype: current discovery subsystem 01:25:32.604 treq: not specified, sq flow control disable supported 01:25:32.604 portid: 1 01:25:32.604 trsvcid: 4420 01:25:32.604 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:25:32.604 traddr: 10.0.0.1 01:25:32.604 eflags: none 01:25:32.604 sectype: none 01:25:32.604 =====Discovery Log Entry 1====== 01:25:32.604 trtype: tcp 01:25:32.604 adrfam: ipv4 01:25:32.604 subtype: nvme subsystem 01:25:32.604 treq: not specified, sq flow control disable supported 01:25:32.604 portid: 1 01:25:32.604 trsvcid: 4420 01:25:32.604 subnqn: nqn.2016-06.io.spdk:testnqn 01:25:32.604 traddr: 10.0.0.1 01:25:32.604 eflags: none 01:25:32.604 sectype: none 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:25:32.604 11:22:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:25:32.604 11:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:25:35.910 Initializing NVMe Controllers 01:25:35.910 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:25:35.910 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:25:35.910 Initialization complete. Launching workers. 01:25:35.910 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33449, failed: 0 01:25:35.910 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33449, failed to submit 0 01:25:35.910 success 0, unsuccess 33449, failed 0 01:25:35.910 11:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:25:35.910 11:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:25:39.189 Initializing NVMe Controllers 01:25:39.190 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:25:39.190 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:25:39.190 Initialization complete. Launching workers. 
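(For reference while the qd=24 run above launches its workers: the target these abort runs talk to was wired up a few lines earlier through the kernel nvmet configfs interface, and the same abort binary is swept over the queue depths in qds=(4 24 64). The following is only a condensed sketch of that setup; the attribute file names are the stock nvmet ones and are assumed, since the xtrace does not show the redirection targets of the echo commands.)

  nqn=nqn.2016-06.io.spdk:testnqn
  cfg=/sys/kernel/config/nvmet

  # carve out one subsystem, one namespace and one port (nvmf/common.sh@658-660 above)
  mkdir "$cfg/subsystems/$nqn"
  mkdir "$cfg/subsystems/$nqn/namespaces/1"
  mkdir "$cfg/ports/1"

  # attribute paths below are assumed (standard nvmet layout), not read from the trace;
  # the 'echo SPDK-<nqn>' seen above goes to a subsystem attribute whose path is not visible here
  echo 1            > "$cfg/subsystems/$nqn/attr_allow_any_host"
  echo /dev/nvme1n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
  echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
  echo tcp          > "$cfg/ports/1/addr_trtype"
  echo 4420         > "$cfg/ports/1/addr_trsvcid"
  echo ipv4         > "$cfg/ports/1/addr_adrfam"
  ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"

  # rabort: run the abort example against the kernel target at each queue depth
  for qd in 4 24 64; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$nqn"
  done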
01:25:39.190 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65725, failed: 0 01:25:39.190 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26404, failed to submit 39321 01:25:39.190 success 0, unsuccess 26404, failed 0 01:25:39.190 11:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:25:39.190 11:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:25:42.470 Initializing NVMe Controllers 01:25:42.470 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:25:42.470 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:25:42.470 Initialization complete. Launching workers. 01:25:42.470 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71698, failed: 0 01:25:42.470 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17894, failed to submit 53804 01:25:42.470 success 0, unsuccess 17894, failed 0 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:25:42.470 11:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:25:42.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:25:43.662 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:25:43.662 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:25:43.662 01:25:43.662 real 0m12.060s 01:25:43.662 user 0m5.530s 01:25:43.662 sys 0m3.723s 01:25:43.662 11:22:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 01:25:43.662 11:22:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:25:43.662 ************************************ 01:25:43.662 END TEST kernel_target_abort 01:25:43.662 ************************************ 01:25:43.662 11:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 01:25:43.662 11:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:25:43.662 
11:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:25:43.663 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 01:25:43.663 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 01:25:43.663 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:25:43.663 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 01:25:43.663 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 01:25:43.663 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:25:43.663 rmmod nvme_tcp 01:25:43.663 rmmod nvme_fabrics 01:25:43.663 rmmod nvme_keyring 01:25:43.663 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 118024 ']' 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 118024 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 118024 ']' 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 118024 01:25:43.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (118024) - No such process 01:25:43.920 Process with pid 118024 is not found 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 118024 is not found' 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 01:25:43.920 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:25:44.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:25:44.179 Waiting for block devices as requested 01:25:44.179 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:25:44.179 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 01:25:44.438 01:25:44.438 real 0m25.942s 01:25:44.438 user 0m49.896s 01:25:44.438 sys 0m6.836s 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 01:25:44.438 ************************************ 01:25:44.438 END TEST nvmf_abort_qd_sizes 01:25:44.438 ************************************ 01:25:44.438 11:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:25:44.438 11:22:49 -- common/autotest_common.sh@1142 -- # return 0 01:25:44.438 11:22:49 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:25:44.438 11:22:49 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 01:25:44.438 11:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:25:44.438 11:22:49 -- common/autotest_common.sh@10 -- # set +x 01:25:44.438 ************************************ 01:25:44.438 START TEST keyring_file 01:25:44.438 ************************************ 01:25:44.438 11:22:49 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:25:44.438 * Looking for test storage... 01:25:44.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:25:44.438 11:22:49 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:25:44.438 11:22:49 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:25:44.438 11:22:49 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:25:44.438 11:22:49 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:25:44.438 11:22:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:44.438 11:22:49 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:44.438 11:22:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:44.438 11:22:49 keyring_file -- paths/export.sh@5 -- # export PATH 01:25:44.438 11:22:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@47 -- # : 0 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:25:44.438 11:22:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:25:44.438 11:22:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:25:44.438 11:22:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:25:44.438 11:22:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:25:44.438 11:22:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:25:44.438 11:22:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@17 -- # name=key0 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@17 -- # digest=0 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@18 -- # mktemp 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fOIl6M4CZK 01:25:44.438 11:22:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:25:44.438 11:22:49 keyring_file -- nvmf/common.sh@705 -- # python - 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fOIl6M4CZK 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fOIl6M4CZK 01:25:44.696 11:22:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fOIl6M4CZK 01:25:44.696 11:22:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@17 -- # name=key1 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@17 -- # digest=0 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@18 -- # mktemp 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dInkmxsitq 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:25:44.696 11:22:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:25:44.696 11:22:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:25:44.696 11:22:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:25:44.696 11:22:49 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 01:25:44.696 11:22:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:25:44.696 11:22:49 keyring_file -- nvmf/common.sh@705 -- # python - 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dInkmxsitq 01:25:44.696 11:22:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dInkmxsitq 01:25:44.696 11:22:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dInkmxsitq 01:25:44.696 11:22:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=118896 01:25:44.696 11:22:49 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:25:44.696 11:22:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 118896 01:25:44.696 11:22:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 118896 ']' 01:25:44.696 11:22:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:44.696 11:22:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:25:44.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:44.696 11:22:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:44.696 11:22:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:25:44.696 11:22:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:25:44.696 [2024-07-22 11:22:49.801774] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
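(The prep_key/format_interchange_psk helpers traced just above turn a raw hex key into an NVMe/TCP PSK interchange string and stash it in a mode-0600 temp file. The sketch below is a rough guess at what that inline `python -` computes; the real implementation lives in nvmf/common.sh and its exact framing is not visible in this trace, so treat the format details as an assumption.)

  key_hex=00112233445566778899aabbccddeeff
  path=$(mktemp)                      # e.g. /tmp/tmp.fOIl6M4CZK in the run above

  # Assumed PSK interchange framing: "NVMeTLSkey-1:<hash>:<base64(key || CRC32)>:",
  # where hash byte 00 corresponds to digest=0 (no retained-key hash).
  python3 - "$key_hex" <<'PY' > "$path"
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)
print('NVMeTLSkey-1:00:{}:'.format(base64.b64encode(key + crc).decode()))
PY

  chmod 0600 "$path"                  # keyring_file_add_key later rejects anything more permissive
  echo "$path"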
01:25:44.696 [2024-07-22 11:22:49.801872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118896 ] 01:25:44.954 [2024-07-22 11:22:49.941598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:44.954 [2024-07-22 11:22:50.021550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:25:45.887 11:22:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:25:45.887 [2024-07-22 11:22:50.792468] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:45.887 null0 01:25:45.887 [2024-07-22 11:22:50.824489] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:25:45.887 [2024-07-22 11:22:50.824679] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:25:45.887 [2024-07-22 11:22:50.832487] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:25:45.887 11:22:50 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 01:25:45.887 11:22:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:25:45.888 [2024-07-22 11:22:50.844480] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:25:45.888 request: 01:25:45.888 2024/07/22 11:22:50 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 01:25:45.888 { 01:25:45.888 "method": "nvmf_subsystem_add_listener", 01:25:45.888 "params": { 01:25:45.888 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:25:45.888 "secure_channel": false, 01:25:45.888 "listen_address": { 01:25:45.888 "trtype": "tcp", 01:25:45.888 "traddr": "127.0.0.1", 01:25:45.888 "trsvcid": "4420" 01:25:45.888 } 01:25:45.888 } 01:25:45.888 } 01:25:45.888 Got JSON-RPC error 
response 01:25:45.888 GoRPCClient: error on JSON-RPC call 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:25:45.888 11:22:50 keyring_file -- keyring/file.sh@46 -- # bperfpid=118931 01:25:45.888 11:22:50 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:25:45.888 11:22:50 keyring_file -- keyring/file.sh@48 -- # waitforlisten 118931 /var/tmp/bperf.sock 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 118931 ']' 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:25:45.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:25:45.888 11:22:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:25:45.888 [2024-07-22 11:22:50.910507] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 01:25:45.888 [2024-07-22 11:22:50.910758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118931 ] 01:25:45.888 [2024-07-22 11:22:51.051417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:46.146 [2024-07-22 11:22:51.127385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:25:46.711 11:22:51 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:25:46.711 11:22:51 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:25:46.711 11:22:51 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK 01:25:46.711 11:22:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK 01:25:46.969 11:22:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dInkmxsitq 01:25:46.969 11:22:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dInkmxsitq 01:25:47.226 11:22:52 keyring_file -- keyring/file.sh@51 -- # get_key key0 01:25:47.226 11:22:52 keyring_file -- keyring/file.sh@51 -- # jq -r .path 01:25:47.226 11:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:47.226 11:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:47.226 11:22:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:47.484 11:22:52 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.fOIl6M4CZK == 
\/\t\m\p\/\t\m\p\.\f\O\I\l\6\M\4\C\Z\K ]] 01:25:47.484 11:22:52 keyring_file -- keyring/file.sh@52 -- # get_key key1 01:25:47.484 11:22:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:25:47.484 11:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:47.484 11:22:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:47.484 11:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:25:47.743 11:22:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dInkmxsitq == \/\t\m\p\/\t\m\p\.\d\I\n\k\m\x\s\i\t\q ]] 01:25:47.743 11:22:52 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 01:25:47.743 11:22:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:25:47.743 11:22:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:47.743 11:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:47.743 11:22:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:47.743 11:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:48.001 11:22:53 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 01:25:48.001 11:22:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 01:25:48.001 11:22:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:25:48.001 11:22:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:48.001 11:22:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:48.001 11:22:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:48.001 11:22:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:25:48.258 11:22:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:25:48.258 11:22:53 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:48.258 11:22:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:48.517 [2024-07-22 11:22:53.630876] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:48.517 nvme0n1 01:25:48.517 11:22:53 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 01:25:48.517 11:22:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:25:48.517 11:22:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:48.517 11:22:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:48.517 11:22:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:48.517 11:22:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:48.774 11:22:53 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 01:25:48.774 11:22:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 01:25:48.774 11:22:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:48.774 11:22:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:25:48.774 11:22:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 01:25:48.774 11:22:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:48.774 11:22:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:25:49.031 11:22:54 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 01:25:49.031 11:22:54 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:25:49.307 Running I/O for 1 seconds... 01:25:50.243 01:25:50.243 Latency(us) 01:25:50.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:50.243 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 01:25:50.243 nvme0n1 : 1.00 13611.35 53.17 0.00 0.00 9375.33 4766.25 19779.96 01:25:50.243 =================================================================================================================== 01:25:50.243 Total : 13611.35 53.17 0.00 0.00 9375.33 4766.25 19779.96 01:25:50.243 0 01:25:50.243 11:22:55 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:25:50.243 11:22:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:25:50.501 11:22:55 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 01:25:50.502 11:22:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:25:50.502 11:22:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:50.502 11:22:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:50.502 11:22:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:50.502 11:22:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:50.760 11:22:55 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 01:25:50.760 11:22:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 01:25:50.760 11:22:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:25:50.760 11:22:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:50.760 11:22:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:50.760 11:22:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:25:50.760 11:22:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:51.020 11:22:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:25:51.020 11:22:56 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:25:51.020 11:22:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:25:51.020 11:22:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:25:51.020 11:22:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:25:51.020 11:22:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:25:51.020 11:22:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:25:51.020 11:22:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
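(Before the negative test that continues below, where attaching with the wrong PSK is expected to fail, the happy-path flow exercised above boils down to the commands sketched here. All of them are taken from the trace; `rpc` is just a local shorthand for the bperf_cmd helper in keyring/common.sh.)

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # register both key files with the bdevperf keyring
  rpc keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK
  rpc keyring_file_add_key key1 /tmp/tmp.dInkmxsitq

  # attach an NVMe/TCP controller that authenticates with key0; while nvme0 exists
  # the controller holds the key, so its refcnt reads 2 instead of 1
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  rpc keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'   # -> 2

  # drive I/O through the attached bdev, then detach and re-check the refcount
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  rpc bdev_nvme_detach_controller nvme0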
01:25:51.020 11:22:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:25:51.020 11:22:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:25:51.278 [2024-07-22 11:22:56.338426] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:25:51.278 [2024-07-22 11:22:56.338986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2263660 (107): Transport endpoint is not connected 01:25:51.278 [2024-07-22 11:22:56.339959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2263660 (9): Bad file descriptor 01:25:51.278 [2024-07-22 11:22:56.340955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:25:51.278 [2024-07-22 11:22:56.341013] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:25:51.278 [2024-07-22 11:22:56.341024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:25:51.278 2024/07/22 11:22:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:25:51.278 request: 01:25:51.278 { 01:25:51.278 "method": "bdev_nvme_attach_controller", 01:25:51.278 "params": { 01:25:51.278 "name": "nvme0", 01:25:51.278 "trtype": "tcp", 01:25:51.278 "traddr": "127.0.0.1", 01:25:51.278 "adrfam": "ipv4", 01:25:51.278 "trsvcid": "4420", 01:25:51.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:25:51.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:25:51.278 "prchk_reftag": false, 01:25:51.278 "prchk_guard": false, 01:25:51.278 "hdgst": false, 01:25:51.278 "ddgst": false, 01:25:51.278 "psk": "key1" 01:25:51.278 } 01:25:51.278 } 01:25:51.278 Got JSON-RPC error response 01:25:51.278 GoRPCClient: error on JSON-RPC call 01:25:51.278 11:22:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:25:51.278 11:22:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:25:51.278 11:22:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:25:51.278 11:22:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:25:51.278 11:22:56 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 01:25:51.278 11:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:25:51.278 11:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:51.278 11:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:51.278 11:22:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:51.278 11:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:51.536 11:22:56 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 01:25:51.536 
11:22:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 01:25:51.536 11:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:51.536 11:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:25:51.536 11:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:51.536 11:22:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:51.536 11:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:25:51.794 11:22:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:25:51.794 11:22:56 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 01:25:51.794 11:22:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:25:52.052 11:22:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 01:25:52.052 11:22:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:25:52.052 11:22:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 01:25:52.052 11:22:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:52.052 11:22:57 keyring_file -- keyring/file.sh@77 -- # jq length 01:25:52.618 11:22:57 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 01:25:52.618 11:22:57 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.fOIl6M4CZK 01:25:52.618 11:22:57 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK 01:25:52.618 11:22:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK 01:25:52.618 [2024-07-22 11:22:57.779400] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fOIl6M4CZK': 0100660 01:25:52.618 [2024-07-22 11:22:57.779447] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:25:52.618 2024/07/22 11:22:57 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.fOIl6M4CZK], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:25:52.618 request: 01:25:52.618 { 01:25:52.618 "method": "keyring_file_add_key", 01:25:52.618 "params": { 01:25:52.618 "name": "key0", 01:25:52.618 "path": "/tmp/tmp.fOIl6M4CZK" 01:25:52.618 } 01:25:52.618 } 01:25:52.618 Got JSON-RPC error response 01:25:52.618 GoRPCClient: error on JSON-RPC call 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:25:52.618 11:22:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:25:52.618 11:22:57 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.fOIl6M4CZK 01:25:52.618 11:22:57 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK 01:25:52.618 11:22:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fOIl6M4CZK 01:25:52.876 11:22:58 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.fOIl6M4CZK 01:25:52.876 11:22:58 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 01:25:52.876 11:22:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:25:52.876 11:22:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:52.876 11:22:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:52.877 11:22:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:52.877 11:22:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:53.135 11:22:58 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 01:25:53.135 11:22:58 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:53.135 11:22:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 01:25:53.135 11:22:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:53.135 11:22:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:25:53.135 11:22:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:25:53.135 11:22:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:25:53.135 11:22:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:25:53.135 11:22:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:53.135 11:22:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:53.393 [2024-07-22 11:22:58.483545] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fOIl6M4CZK': No such file or directory 01:25:53.393 [2024-07-22 11:22:58.483582] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:25:53.393 [2024-07-22 11:22:58.483621] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:25:53.393 [2024-07-22 11:22:58.483629] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:25:53.393 [2024-07-22 11:22:58.483637] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:25:53.393 2024/07/22 
11:22:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 01:25:53.393 request: 01:25:53.393 { 01:25:53.393 "method": "bdev_nvme_attach_controller", 01:25:53.393 "params": { 01:25:53.393 "name": "nvme0", 01:25:53.393 "trtype": "tcp", 01:25:53.393 "traddr": "127.0.0.1", 01:25:53.393 "adrfam": "ipv4", 01:25:53.393 "trsvcid": "4420", 01:25:53.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:25:53.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:25:53.393 "prchk_reftag": false, 01:25:53.393 "prchk_guard": false, 01:25:53.393 "hdgst": false, 01:25:53.393 "ddgst": false, 01:25:53.393 "psk": "key0" 01:25:53.393 } 01:25:53.393 } 01:25:53.393 Got JSON-RPC error response 01:25:53.393 GoRPCClient: error on JSON-RPC call 01:25:53.393 11:22:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 01:25:53.393 11:22:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:25:53.393 11:22:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:25:53.393 11:22:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:25:53.393 11:22:58 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 01:25:53.393 11:22:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:25:53.651 11:22:58 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@17 -- # name=key0 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@17 -- # digest=0 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@18 -- # mktemp 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EW2KcvW2Dz 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:25:53.651 11:22:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:25:53.651 11:22:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:25:53.651 11:22:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:25:53.651 11:22:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:25:53.651 11:22:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:25:53.651 11:22:58 keyring_file -- nvmf/common.sh@705 -- # python - 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EW2KcvW2Dz 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EW2KcvW2Dz 01:25:53.651 11:22:58 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.EW2KcvW2Dz 01:25:53.651 11:22:58 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EW2KcvW2Dz 01:25:53.651 11:22:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EW2KcvW2Dz 01:25:53.910 11:22:59 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:53.910 11:22:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:54.167 nvme0n1 01:25:54.168 11:22:59 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 01:25:54.168 11:22:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:25:54.168 11:22:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:54.168 11:22:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:54.168 11:22:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:54.168 11:22:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:54.426 11:22:59 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 01:25:54.426 11:22:59 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 01:25:54.426 11:22:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:25:54.687 11:22:59 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 01:25:54.687 11:22:59 keyring_file -- keyring/file.sh@101 -- # get_key key0 01:25:54.687 11:22:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:54.687 11:22:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:54.687 11:22:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:54.945 11:23:00 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 01:25:54.945 11:23:00 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 01:25:54.945 11:23:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:25:54.945 11:23:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:54.945 11:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:54.945 11:23:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:54.945 11:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:55.202 11:23:00 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 01:25:55.202 11:23:00 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:25:55.202 11:23:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:25:55.459 11:23:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 01:25:55.459 11:23:00 keyring_file -- keyring/file.sh@104 -- # jq length 01:25:55.459 11:23:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:55.716 11:23:00 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 01:25:55.716 11:23:00 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EW2KcvW2Dz 01:25:55.716 11:23:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EW2KcvW2Dz 
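(The remaining steps traced below re-register key1, attach nvme0 with key0, and then capture the whole bdevperf configuration with save_config so it can be replayed into a fresh bdevperf instance; the -c /dev/fd/63 that appears further down is the process substitution feeding that captured JSON back in. A condensed sketch of the replay pattern, with flags copied from the trace:)

  # snapshot the live configuration (keyring, sock and bdev subsystems, including the
  # keyring_file_add_key and bdev_nvme_attach_controller calls made so far)
  config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)

  # start a second bdevperf and hand it the snapshot at boot; bash turns the process
  # substitution into the /dev/fd/63 path visible in the trace below
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 \
      -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")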
01:25:55.974 11:23:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dInkmxsitq 01:25:55.974 11:23:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dInkmxsitq 01:25:56.231 11:23:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:56.232 11:23:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:25:56.490 nvme0n1 01:25:56.490 11:23:01 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 01:25:56.490 11:23:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 01:25:56.749 11:23:01 keyring_file -- keyring/file.sh@112 -- # config='{ 01:25:56.749 "subsystems": [ 01:25:56.749 { 01:25:56.749 "subsystem": "keyring", 01:25:56.749 "config": [ 01:25:56.749 { 01:25:56.749 "method": "keyring_file_add_key", 01:25:56.749 "params": { 01:25:56.749 "name": "key0", 01:25:56.749 "path": "/tmp/tmp.EW2KcvW2Dz" 01:25:56.749 } 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "method": "keyring_file_add_key", 01:25:56.749 "params": { 01:25:56.749 "name": "key1", 01:25:56.749 "path": "/tmp/tmp.dInkmxsitq" 01:25:56.749 } 01:25:56.749 } 01:25:56.749 ] 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "subsystem": "iobuf", 01:25:56.749 "config": [ 01:25:56.749 { 01:25:56.749 "method": "iobuf_set_options", 01:25:56.749 "params": { 01:25:56.749 "large_bufsize": 135168, 01:25:56.749 "large_pool_count": 1024, 01:25:56.749 "small_bufsize": 8192, 01:25:56.749 "small_pool_count": 8192 01:25:56.749 } 01:25:56.749 } 01:25:56.749 ] 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "subsystem": "sock", 01:25:56.749 "config": [ 01:25:56.749 { 01:25:56.749 "method": "sock_set_default_impl", 01:25:56.749 "params": { 01:25:56.749 "impl_name": "posix" 01:25:56.749 } 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "method": "sock_impl_set_options", 01:25:56.749 "params": { 01:25:56.749 "enable_ktls": false, 01:25:56.749 "enable_placement_id": 0, 01:25:56.749 "enable_quickack": false, 01:25:56.749 "enable_recv_pipe": true, 01:25:56.749 "enable_zerocopy_send_client": false, 01:25:56.749 "enable_zerocopy_send_server": true, 01:25:56.749 "impl_name": "ssl", 01:25:56.749 "recv_buf_size": 4096, 01:25:56.749 "send_buf_size": 4096, 01:25:56.749 "tls_version": 0, 01:25:56.749 "zerocopy_threshold": 0 01:25:56.749 } 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "method": "sock_impl_set_options", 01:25:56.749 "params": { 01:25:56.749 "enable_ktls": false, 01:25:56.749 "enable_placement_id": 0, 01:25:56.749 "enable_quickack": false, 01:25:56.749 "enable_recv_pipe": true, 01:25:56.749 "enable_zerocopy_send_client": false, 01:25:56.749 "enable_zerocopy_send_server": true, 01:25:56.749 "impl_name": "posix", 01:25:56.749 "recv_buf_size": 2097152, 01:25:56.749 "send_buf_size": 2097152, 01:25:56.749 "tls_version": 0, 01:25:56.749 "zerocopy_threshold": 0 01:25:56.749 } 01:25:56.749 } 01:25:56.749 ] 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "subsystem": "vmd", 01:25:56.749 "config": [] 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "subsystem": "accel", 01:25:56.749 "config": [ 01:25:56.749 { 01:25:56.749 "method": 
"accel_set_options", 01:25:56.749 "params": { 01:25:56.749 "buf_count": 2048, 01:25:56.749 "large_cache_size": 16, 01:25:56.749 "sequence_count": 2048, 01:25:56.749 "small_cache_size": 128, 01:25:56.749 "task_count": 2048 01:25:56.749 } 01:25:56.749 } 01:25:56.749 ] 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "subsystem": "bdev", 01:25:56.749 "config": [ 01:25:56.749 { 01:25:56.749 "method": "bdev_set_options", 01:25:56.749 "params": { 01:25:56.749 "bdev_auto_examine": true, 01:25:56.749 "bdev_io_cache_size": 256, 01:25:56.749 "bdev_io_pool_size": 65535, 01:25:56.749 "iobuf_large_cache_size": 16, 01:25:56.749 "iobuf_small_cache_size": 128 01:25:56.749 } 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "method": "bdev_raid_set_options", 01:25:56.749 "params": { 01:25:56.749 "process_max_bandwidth_mb_sec": 0, 01:25:56.749 "process_window_size_kb": 1024 01:25:56.749 } 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "method": "bdev_iscsi_set_options", 01:25:56.749 "params": { 01:25:56.749 "timeout_sec": 30 01:25:56.749 } 01:25:56.749 }, 01:25:56.749 { 01:25:56.749 "method": "bdev_nvme_set_options", 01:25:56.749 "params": { 01:25:56.749 "action_on_timeout": "none", 01:25:56.749 "allow_accel_sequence": false, 01:25:56.749 "arbitration_burst": 0, 01:25:56.749 "bdev_retry_count": 3, 01:25:56.749 "ctrlr_loss_timeout_sec": 0, 01:25:56.749 "delay_cmd_submit": true, 01:25:56.749 "dhchap_dhgroups": [ 01:25:56.749 "null", 01:25:56.749 "ffdhe2048", 01:25:56.749 "ffdhe3072", 01:25:56.749 "ffdhe4096", 01:25:56.749 "ffdhe6144", 01:25:56.749 "ffdhe8192" 01:25:56.749 ], 01:25:56.749 "dhchap_digests": [ 01:25:56.749 "sha256", 01:25:56.749 "sha384", 01:25:56.749 "sha512" 01:25:56.749 ], 01:25:56.749 "disable_auto_failback": false, 01:25:56.749 "fast_io_fail_timeout_sec": 0, 01:25:56.749 "generate_uuids": false, 01:25:56.749 "high_priority_weight": 0, 01:25:56.749 "io_path_stat": false, 01:25:56.749 "io_queue_requests": 512, 01:25:56.750 "keep_alive_timeout_ms": 10000, 01:25:56.750 "low_priority_weight": 0, 01:25:56.750 "medium_priority_weight": 0, 01:25:56.750 "nvme_adminq_poll_period_us": 10000, 01:25:56.750 "nvme_error_stat": false, 01:25:56.750 "nvme_ioq_poll_period_us": 0, 01:25:56.750 "rdma_cm_event_timeout_ms": 0, 01:25:56.750 "rdma_max_cq_size": 0, 01:25:56.750 "rdma_srq_size": 0, 01:25:56.750 "reconnect_delay_sec": 0, 01:25:56.750 "timeout_admin_us": 0, 01:25:56.750 "timeout_us": 0, 01:25:56.750 "transport_ack_timeout": 0, 01:25:56.750 "transport_retry_count": 4, 01:25:56.750 "transport_tos": 0 01:25:56.750 } 01:25:56.750 }, 01:25:56.750 { 01:25:56.750 "method": "bdev_nvme_attach_controller", 01:25:56.750 "params": { 01:25:56.750 "adrfam": "IPv4", 01:25:56.750 "ctrlr_loss_timeout_sec": 0, 01:25:56.750 "ddgst": false, 01:25:56.750 "fast_io_fail_timeout_sec": 0, 01:25:56.750 "hdgst": false, 01:25:56.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:25:56.750 "name": "nvme0", 01:25:56.750 "prchk_guard": false, 01:25:56.750 "prchk_reftag": false, 01:25:56.750 "psk": "key0", 01:25:56.750 "reconnect_delay_sec": 0, 01:25:56.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:25:56.750 "traddr": "127.0.0.1", 01:25:56.750 "trsvcid": "4420", 01:25:56.750 "trtype": "TCP" 01:25:56.750 } 01:25:56.750 }, 01:25:56.750 { 01:25:56.750 "method": "bdev_nvme_set_hotplug", 01:25:56.750 "params": { 01:25:56.750 "enable": false, 01:25:56.750 "period_us": 100000 01:25:56.750 } 01:25:56.750 }, 01:25:56.750 { 01:25:56.750 "method": "bdev_wait_for_examine" 01:25:56.750 } 01:25:56.750 ] 01:25:56.750 }, 01:25:56.750 { 01:25:56.750 "subsystem": 
"nbd", 01:25:56.750 "config": [] 01:25:56.750 } 01:25:56.750 ] 01:25:56.750 }' 01:25:56.750 11:23:01 keyring_file -- keyring/file.sh@114 -- # killprocess 118931 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 118931 ']' 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 118931 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@953 -- # uname 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118931 01:25:56.750 killing process with pid 118931 01:25:56.750 Received shutdown signal, test time was about 1.000000 seconds 01:25:56.750 01:25:56.750 Latency(us) 01:25:56.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:56.750 =================================================================================================================== 01:25:56.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118931' 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@967 -- # kill 118931 01:25:56.750 11:23:01 keyring_file -- common/autotest_common.sh@972 -- # wait 118931 01:25:57.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:25:57.008 11:23:02 keyring_file -- keyring/file.sh@117 -- # bperfpid=119398 01:25:57.008 11:23:02 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:25:57.008 11:23:02 keyring_file -- keyring/file.sh@119 -- # waitforlisten 119398 /var/tmp/bperf.sock 01:25:57.008 11:23:02 keyring_file -- keyring/file.sh@115 -- # echo '{ 01:25:57.008 "subsystems": [ 01:25:57.008 { 01:25:57.008 "subsystem": "keyring", 01:25:57.008 "config": [ 01:25:57.008 { 01:25:57.008 "method": "keyring_file_add_key", 01:25:57.008 "params": { 01:25:57.008 "name": "key0", 01:25:57.008 "path": "/tmp/tmp.EW2KcvW2Dz" 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": "keyring_file_add_key", 01:25:57.008 "params": { 01:25:57.008 "name": "key1", 01:25:57.008 "path": "/tmp/tmp.dInkmxsitq" 01:25:57.008 } 01:25:57.008 } 01:25:57.008 ] 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "subsystem": "iobuf", 01:25:57.008 "config": [ 01:25:57.008 { 01:25:57.008 "method": "iobuf_set_options", 01:25:57.008 "params": { 01:25:57.008 "large_bufsize": 135168, 01:25:57.008 "large_pool_count": 1024, 01:25:57.008 "small_bufsize": 8192, 01:25:57.008 "small_pool_count": 8192 01:25:57.008 } 01:25:57.008 } 01:25:57.008 ] 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "subsystem": "sock", 01:25:57.008 "config": [ 01:25:57.008 { 01:25:57.008 "method": "sock_set_default_impl", 01:25:57.008 "params": { 01:25:57.008 "impl_name": "posix" 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": "sock_impl_set_options", 01:25:57.008 "params": { 01:25:57.008 "enable_ktls": false, 01:25:57.008 "enable_placement_id": 0, 01:25:57.008 "enable_quickack": false, 01:25:57.008 "enable_recv_pipe": true, 01:25:57.008 "enable_zerocopy_send_client": false, 01:25:57.008 "enable_zerocopy_send_server": true, 01:25:57.008 "impl_name": 
"ssl", 01:25:57.008 "recv_buf_size": 4096, 01:25:57.008 "send_buf_size": 4096, 01:25:57.008 "tls_version": 0, 01:25:57.008 "zerocopy_threshold": 0 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": "sock_impl_set_options", 01:25:57.008 "params": { 01:25:57.008 "enable_ktls": false, 01:25:57.008 "enable_placement_id": 0, 01:25:57.008 "enable_quickack": false, 01:25:57.008 "enable_recv_pipe": true, 01:25:57.008 "enable_zerocopy_send_client": false, 01:25:57.008 "enable_zerocopy_send_server": true, 01:25:57.008 "impl_name": "posix", 01:25:57.008 "recv_buf_size": 2097152, 01:25:57.008 "send_buf_size": 2097152, 01:25:57.008 "tls_version": 0, 01:25:57.008 "zerocopy_threshold": 0 01:25:57.008 } 01:25:57.008 } 01:25:57.008 ] 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "subsystem": "vmd", 01:25:57.008 "config": [] 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "subsystem": "accel", 01:25:57.008 "config": [ 01:25:57.008 { 01:25:57.008 "method": "accel_set_options", 01:25:57.008 "params": { 01:25:57.008 "buf_count": 2048, 01:25:57.008 "large_cache_size": 16, 01:25:57.008 "sequence_count": 2048, 01:25:57.008 "small_cache_size": 128, 01:25:57.008 "task_count": 2048 01:25:57.008 } 01:25:57.008 } 01:25:57.008 ] 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "subsystem": "bdev", 01:25:57.008 "config": [ 01:25:57.008 { 01:25:57.008 "method": "bdev_set_options", 01:25:57.008 "params": { 01:25:57.008 "bdev_auto_examine": true, 01:25:57.008 "bdev_io_cache_size": 256, 01:25:57.008 "bdev_io_pool_size": 65535, 01:25:57.008 "iobuf_large_cache_size": 16, 01:25:57.008 "iobuf_small_cache_size": 128 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": "bdev_raid_set_options", 01:25:57.008 "params": { 01:25:57.008 "process_max_bandwidth_mb_sec": 0, 01:25:57.008 "process_window_size_kb": 1024 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": "bdev_iscsi_set_options", 01:25:57.008 "params": { 01:25:57.008 "timeout_sec": 30 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": "bdev_nvme_set_options", 01:25:57.008 "params": { 01:25:57.008 "action_on_timeout": "none", 01:25:57.008 "allow_accel_sequence": false, 01:25:57.008 "arbitration_burst": 0, 01:25:57.008 "bdev_retry_count": 3, 01:25:57.008 "ctrlr_loss_timeout_sec": 0, 01:25:57.008 "delay_cmd_submit": true, 01:25:57.008 "dhchap_dhgroups": [ 01:25:57.008 "null", 01:25:57.008 "ffdhe2048", 01:25:57.008 "ffdhe3072", 01:25:57.008 "ffdhe4096", 01:25:57.008 "ffdhe6144", 01:25:57.008 "ffdhe8192" 01:25:57.008 ], 01:25:57.008 "dhchap_digests": [ 01:25:57.008 "sha256", 01:25:57.008 "sha384", 01:25:57.008 "sha512" 01:25:57.008 ], 01:25:57.008 "disable_auto_failback": false, 01:25:57.008 "fast_io_fail_timeout_sec": 0, 01:25:57.008 "generate_uuids": false, 01:25:57.008 "high_priority_weight": 0, 01:25:57.008 "io_path_stat": false, 01:25:57.008 "io_queue_requests": 512, 01:25:57.008 "keep_alive_timeout_ms": 10000, 01:25:57.008 "low_priority_weight": 0, 01:25:57.008 "medium_priority_weight": 0, 01:25:57.008 "nvme_adminq_poll_period_us": 10000, 01:25:57.008 "nvme_error_stat": false, 01:25:57.008 "nvme_ioq_poll_period_us": 0, 01:25:57.008 "rdma_cm_event_timeout_ms": 0, 01:25:57.008 "rdma_max_cq_size": 0, 01:25:57.008 "rdma_srq_size": 0, 01:25:57.008 "reconnect_delay_sec": 0, 01:25:57.008 "timeout_admin_us": 0, 01:25:57.008 "timeout_us": 0, 01:25:57.008 "transport_ack_timeout": 0, 01:25:57.008 "transport_retry_count": 4, 01:25:57.008 "transport_tos": 0 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": 
"bdev_nvme_attach_controller", 01:25:57.008 "params": { 01:25:57.008 "adrfam": "IPv4", 01:25:57.008 "ctrlr_loss_timeout_sec": 0, 01:25:57.008 "ddgst": false, 01:25:57.008 "fast_io_fail_timeout_sec": 0, 01:25:57.008 "hdgst": false, 01:25:57.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:25:57.008 "name": "nvme0", 01:25:57.008 "prchk_guard": false, 01:25:57.008 "prchk_reftag": false, 01:25:57.008 "psk": "key0", 01:25:57.008 "reconnect_delay_sec": 0, 01:25:57.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:25:57.008 "traddr": "127.0.0.1", 01:25:57.008 "trsvcid": "4420", 01:25:57.008 "trtype": "TCP" 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": "bdev_nvme_set_hotplug", 01:25:57.008 "params": { 01:25:57.008 "enable": false, 01:25:57.008 "period_us": 100000 01:25:57.008 } 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "method": "bdev_wait_for_examine" 01:25:57.008 } 01:25:57.008 ] 01:25:57.008 }, 01:25:57.008 { 01:25:57.008 "subsystem": "nbd", 01:25:57.008 "config": [] 01:25:57.008 } 01:25:57.008 ] 01:25:57.008 }' 01:25:57.008 11:23:02 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 119398 ']' 01:25:57.008 11:23:02 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:25:57.008 11:23:02 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 01:25:57.008 11:23:02 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:25:57.008 11:23:02 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 01:25:57.008 11:23:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:25:57.008 [2024-07-22 11:23:02.145063] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
01:25:57.009 [2024-07-22 11:23:02.145167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119398 ] 01:25:57.266 [2024-07-22 11:23:02.279156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:57.266 [2024-07-22 11:23:02.359468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:25:57.525 [2024-07-22 11:23:02.535622] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:58.090 11:23:03 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:25:58.090 11:23:03 keyring_file -- common/autotest_common.sh@862 -- # return 0 01:25:58.090 11:23:03 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 01:25:58.090 11:23:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:58.090 11:23:03 keyring_file -- keyring/file.sh@120 -- # jq length 01:25:58.347 11:23:03 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 01:25:58.347 11:23:03 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 01:25:58.347 11:23:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:25:58.347 11:23:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:58.347 11:23:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:58.347 11:23:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:58.347 11:23:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:25:58.603 11:23:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:25:58.603 11:23:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 01:25:58.603 11:23:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:25:58.603 11:23:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:25:58.603 11:23:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:25:58.603 11:23:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:25:58.603 11:23:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:25:58.860 11:23:03 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 01:25:58.860 11:23:03 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 01:25:58.860 11:23:03 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 01:25:58.860 11:23:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:25:59.117 11:23:04 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 01:25:59.117 11:23:04 keyring_file -- keyring/file.sh@1 -- # cleanup 01:25:59.117 11:23:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.EW2KcvW2Dz /tmp/tmp.dInkmxsitq 01:25:59.117 11:23:04 keyring_file -- keyring/file.sh@20 -- # killprocess 119398 01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 119398 ']' 01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 119398 01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@953 -- # uname 01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
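Editor's note: the key-count and refcount assertions above are just keyring_get_keys piped through jq, as the get_refcnt helper does. A short sketch of the same checks against the bperf socket used in this run (key0 is held by the attached nvme0 controller, so its refcnt reads 2; key1 reads 1):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # two file-backed keys should be registered after the config load
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length

    # get_refcnt: select one key from the list and read its reference count
    $rpc -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'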
01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119398 01:25:59.117 killing process with pid 119398 01:25:59.117 Received shutdown signal, test time was about 1.000000 seconds 01:25:59.117 01:25:59.117 Latency(us) 01:25:59.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:59.117 =================================================================================================================== 01:25:59.117 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119398' 01:25:59.117 11:23:04 keyring_file -- common/autotest_common.sh@967 -- # kill 119398 01:25:59.118 11:23:04 keyring_file -- common/autotest_common.sh@972 -- # wait 119398 01:25:59.375 11:23:04 keyring_file -- keyring/file.sh@21 -- # killprocess 118896 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 118896 ']' 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 118896 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@953 -- # uname 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118896 01:25:59.375 killing process with pid 118896 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118896' 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@967 -- # kill 118896 01:25:59.375 [2024-07-22 11:23:04.408960] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:25:59.375 11:23:04 keyring_file -- common/autotest_common.sh@972 -- # wait 118896 01:25:59.959 01:25:59.959 real 0m15.433s 01:25:59.959 user 0m37.837s 01:25:59.959 sys 0m3.241s 01:25:59.959 ************************************ 01:25:59.959 END TEST keyring_file 01:25:59.959 ************************************ 01:25:59.959 11:23:04 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 01:25:59.959 11:23:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:25:59.959 11:23:04 -- common/autotest_common.sh@1142 -- # return 0 01:25:59.959 11:23:04 -- spdk/autotest.sh@296 -- # [[ y == y ]] 01:25:59.959 11:23:04 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:25:59.959 11:23:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 01:25:59.959 11:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 01:25:59.959 11:23:04 -- common/autotest_common.sh@10 -- # set +x 01:25:59.959 ************************************ 01:25:59.959 START TEST keyring_linux 01:25:59.959 ************************************ 01:25:59.959 11:23:05 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:25:59.959 * Looking for test storage... 
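Editor's note: before the Linux-keyring test proper starts, nvmf/common.sh is sourced below, which is where the host NQN/ID pair used by the later attach calls comes from. A small sketch of that step; the ${...##*:} expansion is an assumption, the trace only shows the resulting values:

    # generate a throwaway host NQN and reuse its uuid suffix as the host ID
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed expansion; same uuid as the NQN suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")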
01:25:59.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:25:59.959 11:23:05 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:25:59.959 11:23:05 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:25:59.959 11:23:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8977fc08-3b30-49e8-886e-3a1f0545f479 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8977fc08-3b30-49e8-886e-3a1f0545f479 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:25:59.960 11:23:05 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:25:59.960 11:23:05 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:25:59.960 11:23:05 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:25:59.960 11:23:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:59.960 11:23:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:59.960 11:23:05 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:59.960 11:23:05 keyring_linux -- paths/export.sh@5 -- # export PATH 01:25:59.960 11:23:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@47 -- # : 0 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:25:59.960 11:23:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:25:59.960 11:23:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:25:59.960 11:23:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:25:59.960 11:23:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:25:59.960 11:23:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:25:59.960 11:23:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@704 -- # digest=0 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@705 -- # python - 01:25:59.960 11:23:05 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:25:59.960 /tmp/:spdk-test:key0 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:25:59.960 11:23:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:25:59.960 11:23:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@704 -- # digest=0 01:25:59.960 11:23:05 keyring_linux -- nvmf/common.sh@705 -- # python - 01:26:00.217 11:23:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:26:00.217 /tmp/:spdk-test:key1 01:26:00.217 11:23:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:26:00.217 11:23:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=119551 01:26:00.217 11:23:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 119551 01:26:00.217 11:23:05 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:26:00.217 11:23:05 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 119551 ']' 01:26:00.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:00.217 11:23:05 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:00.217 11:23:05 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 01:26:00.217 11:23:05 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:00.217 11:23:05 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 01:26:00.217 11:23:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:26:00.217 [2024-07-22 11:23:05.255836] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
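Editor's note: prep_key above writes each key to /tmp/:spdk-test:keyN in the NVMe TLS PSK interchange form NVMeTLSkey-1:<digest>:<base64 payload>:, which is what later gets loaded into the kernel keyring. A rough stand-in for the `python -` step; the payload layout (key bytes followed by a little-endian CRC32) is an assumption rather than something the trace states:

    key=00112233445566778899aabbccddeeff   # key0 from linux.sh@13 above
    digest=0
    python3 - "$key" "$digest" <<'PY'
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()
    digest = int(sys.argv[2])
    # assumed payload: raw key bytes + little-endian CRC32 of those bytes
    payload = key + struct.pack("<I", zlib.crc32(key))
    print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(payload).decode()}:")
    PY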
01:26:00.217 [2024-07-22 11:23:05.255949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119551 ] 01:26:00.217 [2024-07-22 11:23:05.387204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:00.475 [2024-07-22 11:23:05.488405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 01:26:01.041 11:23:06 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:26:01.041 11:23:06 keyring_linux -- common/autotest_common.sh@862 -- # return 0 01:26:01.041 11:23:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:26:01.041 11:23:06 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 01:26:01.041 11:23:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:26:01.041 [2024-07-22 11:23:06.244637] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:01.299 null0 01:26:01.299 [2024-07-22 11:23:06.276601] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:26:01.299 [2024-07-22 11:23:06.276864] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:26:01.299 11:23:06 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:26:01.299 11:23:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:26:01.299 537772378 01:26:01.299 11:23:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:26:01.299 984721164 01:26:01.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:26:01.299 11:23:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=119584 01:26:01.299 11:23:06 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:26:01.299 11:23:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 119584 /var/tmp/bperf.sock 01:26:01.299 11:23:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 119584 ']' 01:26:01.299 11:23:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 01:26:01.299 11:23:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 01:26:01.299 11:23:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:26:01.299 11:23:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 01:26:01.299 11:23:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:26:01.299 [2024-07-22 11:23:06.358838] Starting SPDK v24.09-pre git sha1 8fb860b73 / DPDK 23.11.0 initialization... 
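Editor's note: unlike keyring_file, the Linux variant does not hand SPDK a file path: the PSK is stored as a "user" key in the kernel session keyring and referenced by name. The keyctl verbs below are the ones exercised in this run; reading the key text back out of /tmp/:spdk-test:key0 with cat is an equivalent shortcut for the literal string used above.

    # store the interchange-format PSK in the session keyring; keyctl add prints the serial
    sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)

    keyctl search @s user :spdk-test:key0    # resolves the same serial (537772378 in this run)
    keyctl print "$sn"                       # dumps the NVMeTLSkey-1:00:... payload

    # the attach call then names the keyring entry instead of a key file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    keyctl unlink "$sn"                      # cleanup removes the link again ("1 links removed")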
01:26:01.299 [2024-07-22 11:23:06.358949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119584 ] 01:26:01.299 [2024-07-22 11:23:06.498240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:01.558 [2024-07-22 11:23:06.585434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 01:26:02.172 11:23:07 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 01:26:02.172 11:23:07 keyring_linux -- common/autotest_common.sh@862 -- # return 0 01:26:02.172 11:23:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:26:02.172 11:23:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:26:02.435 11:23:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:26:02.435 11:23:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:26:02.693 11:23:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:26:02.693 11:23:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:26:02.951 [2024-07-22 11:23:08.004110] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:26:02.951 nvme0n1 01:26:02.951 11:23:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:26:02.951 11:23:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:26:02.951 11:23:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:26:02.951 11:23:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:26:02.951 11:23:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:26:02.951 11:23:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:26:03.209 11:23:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:26:03.209 11:23:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:26:03.209 11:23:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:26:03.209 11:23:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:26:03.209 11:23:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:26:03.209 11:23:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:26:03.209 11:23:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:26:03.467 11:23:08 keyring_linux -- keyring/linux.sh@25 -- # sn=537772378 01:26:03.467 11:23:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:26:03.467 11:23:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:26:03.467 11:23:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 537772378 == \5\3\7\7\7\2\3\7\8 ]] 01:26:03.467 11:23:08 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 537772378 01:26:03.467 11:23:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:26:03.467 11:23:08 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:26:03.725 Running I/O for 1 seconds... 01:26:04.658 01:26:04.658 Latency(us) 01:26:04.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:04.658 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:26:04.658 nvme0n1 : 1.01 10137.60 39.60 0.00 0.00 12535.73 10485.76 22758.87 01:26:04.658 =================================================================================================================== 01:26:04.658 Total : 10137.60 39.60 0.00 0.00 12535.73 10485.76 22758.87 01:26:04.658 0 01:26:04.658 11:23:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:26:04.658 11:23:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:26:04.916 11:23:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:26:04.916 11:23:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:26:04.916 11:23:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:26:04.916 11:23:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:26:04.916 11:23:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:26:04.916 11:23:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:26:05.175 11:23:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:26:05.175 11:23:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:26:05.175 11:23:10 keyring_linux -- keyring/linux.sh@23 -- # return 01:26:05.175 11:23:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:26:05.175 11:23:10 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 01:26:05.175 11:23:10 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:26:05.175 11:23:10 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 01:26:05.175 11:23:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:26:05.175 11:23:10 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 01:26:05.175 11:23:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:26:05.175 11:23:10 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:26:05.175 11:23:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 01:26:05.434 [2024-07-22 11:23:10.571394] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:26:05.434 [2024-07-22 11:23:10.571878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22485b0 (107): Transport endpoint is not connected 01:26:05.434 [2024-07-22 11:23:10.572860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22485b0 (9): Bad file descriptor 01:26:05.435 [2024-07-22 11:23:10.573857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:26:05.435 [2024-07-22 11:23:10.573888] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:26:05.435 [2024-07-22 11:23:10.573898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:26:05.435 2024/07/22 11:23:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:26:05.435 request: 01:26:05.435 { 01:26:05.435 "method": "bdev_nvme_attach_controller", 01:26:05.435 "params": { 01:26:05.435 "name": "nvme0", 01:26:05.435 "trtype": "tcp", 01:26:05.435 "traddr": "127.0.0.1", 01:26:05.435 "adrfam": "ipv4", 01:26:05.435 "trsvcid": "4420", 01:26:05.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:26:05.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:26:05.435 "prchk_reftag": false, 01:26:05.435 "prchk_guard": false, 01:26:05.435 "hdgst": false, 01:26:05.435 "ddgst": false, 01:26:05.435 "psk": ":spdk-test:key1" 01:26:05.435 } 01:26:05.435 } 01:26:05.435 Got JSON-RPC error response 01:26:05.435 GoRPCClient: error on JSON-RPC call 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@651 -- # es=1 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@33 -- # sn=537772378 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 537772378 01:26:05.435 1 links removed 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@33 -- # sn=984721164 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 984721164 01:26:05.435 1 links removed 01:26:05.435 11:23:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 119584 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 119584 ']' 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 119584 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119584 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 01:26:05.435 killing process with pid 119584 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119584' 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@967 -- # kill 119584 01:26:05.435 Received shutdown signal, test time was about 1.000000 seconds 01:26:05.435 01:26:05.435 Latency(us) 01:26:05.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:05.435 =================================================================================================================== 01:26:05.435 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:26:05.435 11:23:10 keyring_linux -- common/autotest_common.sh@972 -- # wait 119584 01:26:05.694 11:23:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 119551 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 119551 ']' 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 119551 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119551 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119551' 01:26:05.694 killing process with pid 119551 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@967 -- # kill 119551 01:26:05.694 11:23:10 keyring_linux -- common/autotest_common.sh@972 -- # wait 119551 01:26:06.260 01:26:06.260 real 0m6.442s 01:26:06.260 user 0m11.936s 01:26:06.260 sys 0m1.815s 01:26:06.260 11:23:11 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 01:26:06.260 11:23:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:26:06.260 ************************************ 01:26:06.260 END TEST keyring_linux 01:26:06.260 ************************************ 01:26:06.519 11:23:11 -- common/autotest_common.sh@1142 -- # return 0 01:26:06.519 11:23:11 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 
']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 01:26:06.519 11:23:11 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 01:26:06.519 11:23:11 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 01:26:06.519 11:23:11 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 01:26:06.519 11:23:11 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 01:26:06.519 11:23:11 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 01:26:06.519 11:23:11 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 01:26:06.519 11:23:11 -- common/autotest_common.sh@722 -- # xtrace_disable 01:26:06.519 11:23:11 -- common/autotest_common.sh@10 -- # set +x 01:26:06.519 11:23:11 -- spdk/autotest.sh@383 -- # autotest_cleanup 01:26:06.519 11:23:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 01:26:06.519 11:23:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 01:26:06.519 11:23:11 -- common/autotest_common.sh@10 -- # set +x 01:26:07.891 INFO: APP EXITING 01:26:07.891 INFO: killing all VMs 01:26:07.891 INFO: killing vhost app 01:26:07.891 INFO: EXIT DONE 01:26:08.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:26:08.822 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:26:08.822 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:26:09.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:26:09.405 Cleaning 01:26:09.405 Removing: /var/run/dpdk/spdk0/config 01:26:09.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:26:09.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:26:09.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:26:09.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:26:09.405 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:26:09.405 Removing: /var/run/dpdk/spdk0/hugepage_info 01:26:09.405 Removing: /var/run/dpdk/spdk1/config 01:26:09.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:26:09.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:26:09.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:26:09.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:26:09.405 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:26:09.405 Removing: /var/run/dpdk/spdk1/hugepage_info 01:26:09.405 Removing: /var/run/dpdk/spdk2/config 01:26:09.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:26:09.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:26:09.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:26:09.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:26:09.405 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:26:09.405 Removing: /var/run/dpdk/spdk2/hugepage_info 01:26:09.405 Removing: /var/run/dpdk/spdk3/config 01:26:09.405 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:26:09.405 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:26:09.405 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:26:09.405 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:26:09.405 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:26:09.405 Removing: /var/run/dpdk/spdk3/hugepage_info 
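Editor's note: the Clean step that follows sweeps the per-instance DPDK runtime state and the SPDK trace files in /dev/shm left behind by the target, spdk_tgt and bdevperf instances above. Roughly, and only as an illustration of what the Removing: lines list (the real cleanup iterates over discovered paths rather than globbing):

    rm -rf /var/run/dpdk/spdk[0-9]*    # per-instance config, fbarray_memseg*, hugepage_info
    rm -rf /var/run/dpdk/spdk_pid*     # stale file-prefix state from short-lived apps
    rm -f  /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*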
01:26:09.405 Removing: /var/run/dpdk/spdk4/config 01:26:09.405 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:26:09.405 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:26:09.405 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:26:09.405 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:26:09.405 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:26:09.405 Removing: /var/run/dpdk/spdk4/hugepage_info 01:26:09.405 Removing: /dev/shm/nvmf_trace.0 01:26:09.405 Removing: /dev/shm/spdk_tgt_trace.pid73282 01:26:09.405 Removing: /var/run/dpdk/spdk0 01:26:09.663 Removing: /var/run/dpdk/spdk1 01:26:09.664 Removing: /var/run/dpdk/spdk2 01:26:09.664 Removing: /var/run/dpdk/spdk3 01:26:09.664 Removing: /var/run/dpdk/spdk4 01:26:09.664 Removing: /var/run/dpdk/spdk_pid100094 01:26:09.664 Removing: /var/run/dpdk/spdk_pid100140 01:26:09.664 Removing: /var/run/dpdk/spdk_pid100172 01:26:09.664 Removing: /var/run/dpdk/spdk_pid100212 01:26:09.664 Removing: /var/run/dpdk/spdk_pid100357 01:26:09.664 Removing: /var/run/dpdk/spdk_pid100504 01:26:09.664 Removing: /var/run/dpdk/spdk_pid100756 01:26:09.664 Removing: /var/run/dpdk/spdk_pid100879 01:26:09.664 Removing: /var/run/dpdk/spdk_pid101121 01:26:09.664 Removing: /var/run/dpdk/spdk_pid101241 01:26:09.664 Removing: /var/run/dpdk/spdk_pid101356 01:26:09.664 Removing: /var/run/dpdk/spdk_pid101696 01:26:09.664 Removing: /var/run/dpdk/spdk_pid102058 01:26:09.664 Removing: /var/run/dpdk/spdk_pid102066 01:26:09.664 Removing: /var/run/dpdk/spdk_pid104283 01:26:09.664 Removing: /var/run/dpdk/spdk_pid104586 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105079 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105081 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105406 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105426 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105440 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105465 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105477 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105620 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105622 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105725 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105733 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105840 01:26:09.664 Removing: /var/run/dpdk/spdk_pid105843 01:26:09.664 Removing: /var/run/dpdk/spdk_pid106313 01:26:09.664 Removing: /var/run/dpdk/spdk_pid106356 01:26:09.664 Removing: /var/run/dpdk/spdk_pid106512 01:26:09.664 Removing: /var/run/dpdk/spdk_pid106629 01:26:09.664 Removing: /var/run/dpdk/spdk_pid107007 01:26:09.664 Removing: /var/run/dpdk/spdk_pid107251 01:26:09.664 Removing: /var/run/dpdk/spdk_pid107721 01:26:09.664 Removing: /var/run/dpdk/spdk_pid108288 01:26:09.664 Removing: /var/run/dpdk/spdk_pid109583 01:26:09.664 Removing: /var/run/dpdk/spdk_pid110172 01:26:09.664 Removing: /var/run/dpdk/spdk_pid110180 01:26:09.664 Removing: /var/run/dpdk/spdk_pid112089 01:26:09.664 Removing: /var/run/dpdk/spdk_pid112166 01:26:09.664 Removing: /var/run/dpdk/spdk_pid112233 01:26:09.664 Removing: /var/run/dpdk/spdk_pid112323 01:26:09.664 Removing: /var/run/dpdk/spdk_pid112482 01:26:09.664 Removing: /var/run/dpdk/spdk_pid112567 01:26:09.664 Removing: /var/run/dpdk/spdk_pid112652 01:26:09.664 Removing: /var/run/dpdk/spdk_pid112729 01:26:09.664 Removing: /var/run/dpdk/spdk_pid113071 01:26:09.664 Removing: /var/run/dpdk/spdk_pid113745 01:26:09.664 Removing: /var/run/dpdk/spdk_pid115080 01:26:09.664 Removing: /var/run/dpdk/spdk_pid115280 01:26:09.664 Removing: /var/run/dpdk/spdk_pid115559 01:26:09.664 Removing: 
/var/run/dpdk/spdk_pid115853 01:26:09.664 Removing: /var/run/dpdk/spdk_pid116402 01:26:09.664 Removing: /var/run/dpdk/spdk_pid116407 01:26:09.664 Removing: /var/run/dpdk/spdk_pid116759 01:26:09.664 Removing: /var/run/dpdk/spdk_pid116917 01:26:09.664 Removing: /var/run/dpdk/spdk_pid117072 01:26:09.664 Removing: /var/run/dpdk/spdk_pid117169 01:26:09.664 Removing: /var/run/dpdk/spdk_pid117319 01:26:09.664 Removing: /var/run/dpdk/spdk_pid117424 01:26:09.664 Removing: /var/run/dpdk/spdk_pid118093 01:26:09.664 Removing: /var/run/dpdk/spdk_pid118129 01:26:09.664 Removing: /var/run/dpdk/spdk_pid118164 01:26:09.664 Removing: /var/run/dpdk/spdk_pid118413 01:26:09.664 Removing: /var/run/dpdk/spdk_pid118447 01:26:09.664 Removing: /var/run/dpdk/spdk_pid118478 01:26:09.664 Removing: /var/run/dpdk/spdk_pid118896 01:26:09.664 Removing: /var/run/dpdk/spdk_pid118931 01:26:09.664 Removing: /var/run/dpdk/spdk_pid119398 01:26:09.664 Removing: /var/run/dpdk/spdk_pid119551 01:26:09.664 Removing: /var/run/dpdk/spdk_pid119584 01:26:09.664 Removing: /var/run/dpdk/spdk_pid73131 01:26:09.664 Removing: /var/run/dpdk/spdk_pid73282 01:26:09.664 Removing: /var/run/dpdk/spdk_pid73543 01:26:09.664 Removing: /var/run/dpdk/spdk_pid73630 01:26:09.664 Removing: /var/run/dpdk/spdk_pid73675 01:26:09.664 Removing: /var/run/dpdk/spdk_pid73779 01:26:09.664 Removing: /var/run/dpdk/spdk_pid73809 01:26:09.664 Removing: /var/run/dpdk/spdk_pid73937 01:26:09.664 Removing: /var/run/dpdk/spdk_pid74212 01:26:09.664 Removing: /var/run/dpdk/spdk_pid74388 01:26:09.922 Removing: /var/run/dpdk/spdk_pid74470 01:26:09.922 Removing: /var/run/dpdk/spdk_pid74562 01:26:09.922 Removing: /var/run/dpdk/spdk_pid74652 01:26:09.922 Removing: /var/run/dpdk/spdk_pid74690 01:26:09.922 Removing: /var/run/dpdk/spdk_pid74720 01:26:09.922 Removing: /var/run/dpdk/spdk_pid74782 01:26:09.922 Removing: /var/run/dpdk/spdk_pid74900 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75524 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75588 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75657 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75685 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75764 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75792 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75882 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75910 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75962 01:26:09.922 Removing: /var/run/dpdk/spdk_pid75992 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76038 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76069 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76215 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76251 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76325 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76395 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76421 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76483 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76518 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76552 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76587 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76616 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76656 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76685 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76725 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76754 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76794 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76823 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76863 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76892 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76921 01:26:09.922 Removing: /var/run/dpdk/spdk_pid76961 01:26:09.922 Removing: 
/var/run/dpdk/spdk_pid76990 01:26:09.922 Removing: /var/run/dpdk/spdk_pid77030 01:26:09.922 Removing: /var/run/dpdk/spdk_pid77062 01:26:09.922 Removing: /var/run/dpdk/spdk_pid77105 01:26:09.922 Removing: /var/run/dpdk/spdk_pid77135 01:26:09.922 Removing: /var/run/dpdk/spdk_pid77175 01:26:09.922 Removing: /var/run/dpdk/spdk_pid77240 01:26:09.922 Removing: /var/run/dpdk/spdk_pid77351 01:26:09.922 Removing: /var/run/dpdk/spdk_pid77767 01:26:09.922 Removing: /var/run/dpdk/spdk_pid84447 01:26:09.922 Removing: /var/run/dpdk/spdk_pid84789 01:26:09.922 Removing: /var/run/dpdk/spdk_pid87177 01:26:09.922 Removing: /var/run/dpdk/spdk_pid87561 01:26:09.922 Removing: /var/run/dpdk/spdk_pid87828 01:26:09.922 Removing: /var/run/dpdk/spdk_pid87872 01:26:09.922 Removing: /var/run/dpdk/spdk_pid88505 01:26:09.922 Removing: /var/run/dpdk/spdk_pid88947 01:26:09.922 Removing: /var/run/dpdk/spdk_pid88996 01:26:09.922 Removing: /var/run/dpdk/spdk_pid89355 01:26:09.922 Removing: /var/run/dpdk/spdk_pid89881 01:26:09.922 Removing: /var/run/dpdk/spdk_pid90326 01:26:09.922 Removing: /var/run/dpdk/spdk_pid91292 01:26:09.922 Removing: /var/run/dpdk/spdk_pid92262 01:26:09.922 Removing: /var/run/dpdk/spdk_pid92379 01:26:09.922 Removing: /var/run/dpdk/spdk_pid92447 01:26:09.922 Removing: /var/run/dpdk/spdk_pid93919 01:26:09.922 Removing: /var/run/dpdk/spdk_pid94146 01:26:09.922 Removing: /var/run/dpdk/spdk_pid99413 01:26:09.922 Removing: /var/run/dpdk/spdk_pid99840 01:26:09.922 Removing: /var/run/dpdk/spdk_pid99948 01:26:09.922 Clean 01:26:10.180 11:23:15 -- common/autotest_common.sh@1451 -- # return 0 01:26:10.180 11:23:15 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 01:26:10.180 11:23:15 -- common/autotest_common.sh@728 -- # xtrace_disable 01:26:10.180 11:23:15 -- common/autotest_common.sh@10 -- # set +x 01:26:10.180 11:23:15 -- spdk/autotest.sh@386 -- # timing_exit autotest 01:26:10.180 11:23:15 -- common/autotest_common.sh@728 -- # xtrace_disable 01:26:10.180 11:23:15 -- common/autotest_common.sh@10 -- # set +x 01:26:10.180 11:23:15 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:26:10.180 11:23:15 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 01:26:10.180 11:23:15 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 01:26:10.180 11:23:15 -- spdk/autotest.sh@391 -- # hash lcov 01:26:10.180 11:23:15 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 01:26:10.180 11:23:15 -- spdk/autotest.sh@393 -- # hostname 01:26:10.180 11:23:15 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 01:26:10.439 geninfo: WARNING: invalid characters removed from testname! 
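Editor's note: the coverage stage around this point boils down to three lcov passes: capture counters from the build tree, merge them with the pre-test baseline, then strip third-party and helper paths. Condensed from the invocations in this log; the full --rc flag list and the one-pattern-per-call filtering are abbreviated here:

    out=/home/vagrant/spdk_repo/spdk/../output
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    $LCOV -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o $out/cov_test.info
    $LCOV -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
    $LCOV -r $out/cov_total.info '*/dpdk/*' '/usr/*' '*/examples/vmd/*' -o $out/cov_total.info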
01:26:37.002 11:23:38 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:26:37.002 11:23:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:26:39.529 11:23:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:26:42.125 11:23:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:26:44.655 11:23:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:26:47.183 11:23:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:26:49.710 11:23:54 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 01:26:49.710 11:23:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:49.710 11:23:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 01:26:49.710 11:23:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:49.710 11:23:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:49.710 11:23:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.710 11:23:54 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.710 11:23:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.710 11:23:54 -- paths/export.sh@5 -- $ export PATH 01:26:49.710 11:23:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.710 11:23:54 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 01:26:49.710 11:23:54 -- common/autobuild_common.sh@447 -- $ date +%s 01:26:49.710 11:23:54 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721647434.XXXXXX 01:26:49.710 11:23:54 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721647434.nmMKQI 01:26:49.710 11:23:54 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 01:26:49.710 11:23:54 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 01:26:49.710 11:23:54 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 01:26:49.710 11:23:54 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 01:26:49.710 11:23:54 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 01:26:49.710 11:23:54 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 01:26:49.710 11:23:54 -- common/autobuild_common.sh@463 -- $ get_config_params 01:26:49.710 11:23:54 -- common/autotest_common.sh@396 -- $ xtrace_disable 01:26:49.710 11:23:54 -- common/autotest_common.sh@10 -- $ set +x 01:26:49.710 11:23:54 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 01:26:49.710 11:23:54 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 01:26:49.710 11:23:54 -- pm/common@17 -- $ local monitor 01:26:49.710 11:23:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:26:49.710 11:23:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:26:49.710 11:23:54 -- pm/common@21 -- $ date +%s 01:26:49.710 11:23:54 -- pm/common@25 -- $ sleep 1 01:26:49.710 11:23:54 -- pm/common@21 -- $ date +%s 01:26:49.710 11:23:54 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721647434 01:26:49.710 11:23:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721647434 01:26:49.710 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721647434_collect-vmstat.pm.log 01:26:49.710 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721647434_collect-cpu-load.pm.log 01:26:50.643 11:23:55 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 01:26:50.643 11:23:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 01:26:50.643 11:23:55 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 01:26:50.643 11:23:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 01:26:50.643 11:23:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 01:26:50.643 11:23:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 01:26:50.643 11:23:55 -- spdk/autopackage.sh@19 -- $ timing_finish 01:26:50.643 11:23:55 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 01:26:50.643 11:23:55 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 01:26:50.643 11:23:55 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:26:50.643 11:23:55 -- spdk/autopackage.sh@20 -- $ exit 0 01:26:50.643 11:23:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 01:26:50.643 11:23:55 -- pm/common@29 -- $ signal_monitor_resources TERM 01:26:50.643 11:23:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:26:50.643 11:23:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:26:50.643 11:23:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:26:50.644 11:23:55 -- pm/common@44 -- $ pid=121321 01:26:50.644 11:23:55 -- pm/common@50 -- $ kill -TERM 121321 01:26:50.644 11:23:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:26:50.644 11:23:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:26:50.644 11:23:55 -- pm/common@44 -- $ pid=121323 01:26:50.644 11:23:55 -- pm/common@50 -- $ kill -TERM 121323 01:26:50.644 + [[ -n 6002 ]] 01:26:50.644 + sudo kill 6002 01:26:50.652 [Pipeline] } 01:26:50.672 [Pipeline] // timeout 01:26:50.677 [Pipeline] } 01:26:50.693 [Pipeline] // stage 01:26:50.697 [Pipeline] } 01:26:50.715 [Pipeline] // catchError 01:26:50.721 [Pipeline] stage 01:26:50.723 [Pipeline] { (Stop VM) 01:26:50.733 [Pipeline] sh 01:26:51.006 + vagrant halt 01:26:54.285 ==> default: Halting domain... 01:27:00.854 [Pipeline] sh 01:27:01.132 + vagrant destroy -f 01:27:03.660 ==> default: Removing domain... 
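(Annotation: the stop_monitor_resources step above works purely from pid files written by the resource monitors; the following is a rough sketch of that shutdown pattern under the same power/ output layout the log shows. The variable names and the loop structure are illustrative, not the exact pm/common implementation.)

    # Rough sketch of the monitor shutdown seen above: each monitor recorded its
    # pid under the power/ output directory, and cleanup sends SIGTERM to it.
    POWER_DIR=${POWER_DIR:-/home/vagrant/spdk_repo/spdk/../output/power}
    for pidfile in "$POWER_DIR"/collect-cpu-load.pid "$POWER_DIR"/collect-vmstat.pid; do
        [[ -e $pidfile ]] || continue        # monitor never started, nothing to stop
        kill -TERM "$(<"$pidfile")" || true  # terminate the recorded monitor process
    done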
01:27:04.239 [Pipeline] sh 01:27:04.517 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 01:27:04.528 [Pipeline] } 01:27:04.547 [Pipeline] // stage 01:27:04.553 [Pipeline] } 01:27:04.572 [Pipeline] // dir 01:27:04.578 [Pipeline] } 01:27:04.598 [Pipeline] // wrap 01:27:04.604 [Pipeline] } 01:27:04.620 [Pipeline] // catchError 01:27:04.630 [Pipeline] stage 01:27:04.632 [Pipeline] { (Epilogue) 01:27:04.648 [Pipeline] sh 01:27:04.928 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:27:10.223 [Pipeline] catchError 01:27:10.225 [Pipeline] { 01:27:10.237 [Pipeline] sh 01:27:10.513 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:27:10.513 Artifacts sizes are good 01:27:10.522 [Pipeline] } 01:27:10.542 [Pipeline] // catchError 01:27:10.552 [Pipeline] archiveArtifacts 01:27:10.559 Archiving artifacts 01:27:10.737 [Pipeline] cleanWs 01:27:10.787 [WS-CLEANUP] Deleting project workspace... 01:27:10.787 [WS-CLEANUP] Deferred wipeout is used... 01:27:10.792 [WS-CLEANUP] done 01:27:10.793 [Pipeline] } 01:27:10.810 [Pipeline] // stage 01:27:10.815 [Pipeline] } 01:27:10.828 [Pipeline] // node 01:27:10.833 [Pipeline] End of Pipeline 01:27:10.953 Finished: SUCCESS